docs: update versioning to v0.7.0 and document new features

- Standardize versioning from 0.7 to 0.7.0 across all guides and Docker tags
- Update Knowledge Base tool documentation with new query_knowledge_files and query_knowledge_bases (metadata search) tools
- Add documentation for the new Model Activity Chart in Evaluations
- Add WHISPER_MULTILINGUAL env var and update WHISPER_VAD_FILTER configuration details

Author: Classic
Date: 2026-01-09 20:26:37 +01:00
Parent: c00c26fd65
Commit: b3ff327390
9 changed files with 29 additions and 11 deletions

@@ -27,6 +27,7 @@ Most of these settings can also be configured in the **Admin Panel → Settings
| `WHISPER_MODEL_DIR` | Directory to store Whisper model files | `{CACHE_DIR}/whisper/models` |
| `WHISPER_COMPUTE_TYPE` | Compute type for inference (see note below) | `int8` |
| `WHISPER_LANGUAGE` | ISO 639-1 language code (empty = auto-detect) | empty |
| `WHISPER_MULTILINGUAL` | Use the multilingual Whisper model | `false` |
| `WHISPER_MODEL_AUTO_UPDATE` | Auto-download model updates | `false` |
| `WHISPER_VAD_FILTER` | Enable Voice Activity Detection filter | `false` |
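
The new variables from the table above can be passed to the container at startup. A minimal deployment sketch (the port mapping, container name, and `de` language choice are examples only, not defaults):

```shell
# Example: enable the multilingual model and the VAD filter at container start.
# Any WHISPER_* variable from the table above can be set the same way.
docker run -d -p 3000:8080 \
  -e WHISPER_MULTILINGUAL=true \
  -e WHISPER_VAD_FILTER=true \
  -e WHISPER_LANGUAGE=de \
  --name open-webui \
  ghcr.io/open-webui/open-webui:v0.7.0
```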

@@ -93,6 +93,7 @@ For smaller models like Whisper, CPU mode often provides comparable performance
#### Poor Recognition Accuracy
- **Set the language explicitly** using `WHISPER_LANGUAGE=en` (uses ISO 639-1 codes)
- **Enable multilingual support** — Use `WHISPER_MULTILINGUAL=true` if you need to support languages other than English. When disabled (the default), the English-only model variant is used, which performs better on English-only tasks.
- **Use a larger Whisper model** — options: `tiny`, `base`, `small`, `medium`, `large`
- Larger models are more accurate but slower
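
The tips above combine naturally in a single configuration. An illustrative sketch — `WHISPER_MODEL` is assumed here to follow the naming of the other `WHISPER_*` variables shown in the settings table:

```shell
# Example: pin the language and choose a larger model for better accuracy.
# "medium" trades speed for accuracy; see the model size options above.
docker run -d -p 3000:8080 \
  -e WHISPER_MODEL=medium \
  -e WHISPER_LANGUAGE=en \
  --name open-webui \
  ghcr.io/open-webui/open-webui:v0.7.0
```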

@@ -86,7 +86,17 @@ This is a sample leaderboard layout:
![Leaderboard Example](/images/evaluation/leaderboard.png)
### Model Activity Tracking
In addition to overall Elo ratings, you can now view a model's performance history through the **Model Activity Chart**. This feature provides a chronological view of how a model's evaluation results have evolved over time.
- **Diverging Chart**: The chart displays wins (positive) and losses (negative) on a daily or weekly basis, giving you a clear visual indicator of the model's reliability over time.
- **Time Ranges**: You can toggle between different time horizons: **30 Days**, **1 Year**, or **All Time**.
- **Weekly Aggregation**: For longer time ranges (1Y and All), the data is automatically aggregated by week to provide a smoother, more readable trend.
To view the activity chart, click on a model in the Leaderboard to open its detailed evaluation modal.
![Model Activity Chart](/images/evaluation/activity-chart.png)
### Topic-Based Reranking
When you rate chats, you can **tag them by topic** for more granular insights. This is especially useful if you're working in different domains like **customer service, creative writing, technical support**, etc.

@@ -37,7 +37,7 @@ You can also edit the LLM's response and enter your image generation prompt as t
:::info
**Legacy "Generate Image" Button:**
As of Open WebUI v0.7, the native "Generate Image" button (which allowed generating an image directly from a message's content) was removed. If you wish to restore this functionality, you can use the community-built **[Generate Image Action](https://openwebui.com/posts/3fadc3ca-c955-4c9e-9582-7438f0911b62)**.
As of Open WebUI v0.7.0, the native "Generate Image" button (which allowed generating an image directly from a message's content) was removed. If you wish to restore this functionality, you can use the community-built **[Generate Image Action](https://openwebui.com/posts/3fadc3ca-c955-4c9e-9582-7438f0911b62)**.
:::
## Restoring the "Generate Image" Button

@@ -155,10 +155,11 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo
| `fetch_url` | Visits a URL and extracts text content via the Web Loader. | Part of Web Search feature. |
| **Knowledge Base** | | |
| `list_knowledge_bases` | List the user's accessible knowledge bases with file counts. | Always available. |
| `query_knowledge_bases` | Search knowledge bases by semantic similarity to query. Finds KBs whose name/description match the meaning of your query. Use this to discover relevant knowledge bases before querying their files. | Always available. |
| `search_knowledge_bases` | Search knowledge bases by name and description. | Always available. |
| `query_knowledge_files` | Search internal knowledge base files using semantic/vector search. This should be your first choice for finding information before searching the web. | Always available. |
| `search_knowledge_files` | Search files across accessible knowledge bases by filename. | Always available. |
| `view_knowledge_file` | Get the full content of a file from a knowledge base. | Always available. |
| `query_knowledge_bases` | Search internal knowledge bases using semantic/vector search. Should be your first choice for finding information before searching the web. | Always available. |
| **Image Gen** | | |
| `generate_image` | Generates a new image based on a prompt (supports `steps`). | `ENABLE_IMAGE_GENERATION` enabled. |
| `edit_image` | Edits an existing image based on a prompt and URL. | `ENABLE_IMAGE_EDIT` enabled. |
@@ -227,7 +228,7 @@ Interleaved thinking requires models with strong reasoning capabilities. This fe
This is fundamentally different from a single-shot tool call. In an interleaved workflow, the model follows a cycle:
1. **Reason**: Analyze the user's intent and identify information gaps.
2. **Act**: Call a tool (e.g., `query_knowledge_bases` for internal docs or `search_web` and `fetch_url` for web research).
2. **Act**: Call a tool (e.g., `query_knowledge_files` for internal docs or `search_web` and `fetch_url` for web research).
3. **Think**: Read the tool's output and update its internal understanding.
4. **Iterate**: If the answer isn't clear, call another tool (e.g., `view_knowledge_file` to read a specific document or `fetch_url` to read a specific page) or refine the search.
5. **Finalize**: Only after completing this "Deep Research" cycle does the model provide a final, grounded answer.

@@ -12,7 +12,7 @@ As new variables are introduced, this page will be updated to reflect the growin
:::info
This page is up-to-date with Open WebUI release version [v0.7](https://github.com/open-webui/open-webui/releases/tag/v0.7), but is still a work in progress to later include more accurate descriptions, listing out options available for environment variables, defaults, and improving descriptions.
This page is up-to-date with Open WebUI release version [v0.7.0](https://github.com/open-webui/open-webui/releases/tag/v0.7.0), but is still a work in progress to later include more accurate descriptions, listing out options available for environment variables, defaults, and improving descriptions.
:::
@@ -3642,7 +3642,6 @@ Note: If none of the specified languages are available and `en` was not in your
- Type: `bool`
- Default: `False`
- Description: Specifies whether to apply a Voice Activity Detection (VAD) filter to Whisper Speech-to-Text.
- Persistence: This environment variable is a `PersistentConfig` variable.
#### `WHISPER_MODEL_AUTO_UPDATE`
@@ -3656,6 +3655,12 @@ Note: If none of the specified languages are available and `en` was not in your
- Default: `None`
- Description: Specifies the ISO 639-1 language Whisper uses for STT (ISO 639-2 for Hawaiian and Cantonese). Whisper predicts the language by default.
#### `WHISPER_MULTILINGUAL`
- Type: `bool`
- Default: `False`
- Description: Specifies whether to use the multilingual Whisper model. When `False`, the English-only model is used, which performs better on English-centric tasks; when `True`, multiple languages are supported.
### Speech-to-Text (OpenAI)
#### `AUDIO_STT_ENGINE`

@@ -23,7 +23,7 @@ docker pull ghcr.io/open-webui/open-webui:main-slim
You can also pull a specific Open WebUI release version directly by using a versioned image tag. This is recommended for production environments to ensure stable and reproducible deployments.
```bash
docker pull ghcr.io/open-webui/open-webui:v0.7
docker pull ghcr.io/open-webui/open-webui:v0.7.0
```
## Step 2: Run the Container

@@ -99,9 +99,9 @@ ghcr.io/open-webui/open-webui:<RELEASE_VERSION>-<TYPE>
Examples (pinned versions for illustration purposes only):
```
ghcr.io/open-webui/open-webui:v0.7
ghcr.io/open-webui/open-webui:v0.7-ollama
ghcr.io/open-webui/open-webui:v0.7-cuda
ghcr.io/open-webui/open-webui:v0.7.0
ghcr.io/open-webui/open-webui:v0.7.0-ollama
ghcr.io/open-webui/open-webui:v0.7.0-cuda
```
### Using the Dev Branch 🌙

@@ -10,7 +10,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI
:::
> [!WARNING]
> This documentation was created/updated based on version 0.7 and updated for recent migrations.
> This documentation was created/updated based on version 0.7.0 and updated for recent migrations.
## Open-WebUI Internal SQLite Database