From b3ff32739067245b8b2f4868e3af26b0d3ccb29c Mon Sep 17 00:00:00 2001
From: Classic
Date: Fri, 9 Jan 2026 20:26:37 +0100
Subject: [PATCH] docs: update versioning to v0.7.0 and document new features

- Standardize versioning from 0.7 to 0.7.0 across all guides and Docker tags
- Update Knowledge Base tool documentation with the new query_knowledge_files
  and query_knowledge_bases (metadata search) tools
- Add documentation for the new Model Activity Chart in Evaluations
- Add WHISPER_MULTILINGUAL env var and update WHISPER_VAD_FILTER
  configuration details
---
 docs/features/audio/speech-to-text/env-variables.md |  1 +
 docs/features/audio/speech-to-text/stt-config.md    |  1 +
 docs/features/evaluation/index.mdx                  | 12 +++++++++++-
 docs/features/image-generation-and-editing/usage.md |  2 +-
 docs/features/plugin/tools/index.mdx                |  5 +++--
 docs/getting-started/env-configuration.mdx          |  9 +++++++--
 .../quick-start/tab-docker/ManualDocker.md          |  2 +-
 docs/intro.mdx                                      |  6 +++---
 docs/tutorials/tips/sqlite-database.md              |  2 +-
 9 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/docs/features/audio/speech-to-text/env-variables.md b/docs/features/audio/speech-to-text/env-variables.md
index 0f002057..ee16d5b1 100644
--- a/docs/features/audio/speech-to-text/env-variables.md
+++ b/docs/features/audio/speech-to-text/env-variables.md
@@ -27,6 +27,7 @@ Most of these settings can also be configured in the **Admin Panel → Settings
 | `WHISPER_MODEL_DIR` | Directory to store Whisper model files | `{CACHE_DIR}/whisper/models` |
 | `WHISPER_COMPUTE_TYPE` | Compute type for inference (see note below) | `int8` |
 | `WHISPER_LANGUAGE` | ISO 639-1 language code (empty = auto-detect) | empty |
+| `WHISPER_MULTILINGUAL` | Use the multilingual Whisper model | `false` |
 | `WHISPER_MODEL_AUTO_UPDATE` | Auto-download model updates | `false` |
 | `WHISPER_VAD_FILTER` | Enable Voice Activity Detection filter | `false` |
diff --git a/docs/features/audio/speech-to-text/stt-config.md b/docs/features/audio/speech-to-text/stt-config.md
index a7991e4d..11117a36 100644
--- a/docs/features/audio/speech-to-text/stt-config.md
+++ b/docs/features/audio/speech-to-text/stt-config.md
@@ -93,6 +93,7 @@ For smaller models like Whisper, CPU mode often provides comparable performance

 #### Poor Recognition Accuracy
 - **Set the language explicitly** using `WHISPER_LANGUAGE=en` (uses ISO 639-1 codes)
+- **Enable multilingual support** — use `WHISPER_MULTILINGUAL=true` if you need to support languages other than English. When disabled (the default), only the English-only variant of the model is used, for better performance on English tasks.
 - **Use a larger Whisper model** — options: `tiny`, `base`, `small`, `medium`, `large`
 - Larger models are more accurate but slower
diff --git a/docs/features/evaluation/index.mdx b/docs/features/evaluation/index.mdx
index 8f1a3e44..eaca2b71 100644
--- a/docs/features/evaluation/index.mdx
+++ b/docs/features/evaluation/index.mdx
@@ -86,7 +86,17 @@ This is a sample leaderboard layout:

 ![Leaderboard Example](/images/evaluation/leaderboard.png)

-### Topic-Based Reranking
+### Model Activity Tracking
+
+In addition to overall Elo ratings, you can now view a model's performance history through the **Model Activity Chart**. This feature provides a chronological view of how a model's evaluation results have evolved over time.
+
+- **Diverging Chart**: The chart displays wins (positive) and losses (negative) daily or weekly, giving you a clear visual indicator of the model's reliability over time.
+- **Time Ranges**: You can toggle between different time horizons: **30 Days**, **1 Year**, or **All Time**.
+- **Weekly Aggregation**: For longer time ranges (1Y and All), the data is automatically aggregated by week to provide a smoother, more readable trend.
+
+To view the activity chart, click on a model in the Leaderboard to open its detailed evaluation modal.
+
+![Model Activity Chart](/images/evaluation/activity-chart.png)

 When you rate chats, you can **tag them by topic** for more granular insights. This is especially useful if you're working in different domains like **customer service, creative writing, technical support**, etc.
diff --git a/docs/features/image-generation-and-editing/usage.md b/docs/features/image-generation-and-editing/usage.md
index cf80ded8..4a132db1 100644
--- a/docs/features/image-generation-and-editing/usage.md
+++ b/docs/features/image-generation-and-editing/usage.md
@@ -37,7 +37,7 @@ You can also edit the LLM's response and enter your image generation prompt as t

 :::info
 **Legacy "Generate Image" Button:**
-As of Open WebUI v0.7, the native "Generate Image" button (which allowed generating an image directly from a message's content) was removed. If you wish to restore this functionality, you can use the community-built **[Generate Image Action](https://openwebui.com/posts/3fadc3ca-c955-4c9e-9582-7438f0911b62)**.
+As of Open WebUI v0.7.0, the native "Generate Image" button (which allowed generating an image directly from a message's content) was removed. If you wish to restore this functionality, you can use the community-built **[Generate Image Action](https://openwebui.com/posts/3fadc3ca-c955-4c9e-9582-7438f0911b62)**.
 :::

 ## Restoring the "Generate Image" Button
diff --git a/docs/features/plugin/tools/index.mdx b/docs/features/plugin/tools/index.mdx
index e8633548..fd8f7d1e 100644
--- a/docs/features/plugin/tools/index.mdx
+++ b/docs/features/plugin/tools/index.mdx
@@ -155,10 +155,11 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo
 | `fetch_url` | Visits a URL and extracts text content via the Web Loader. | Part of Web Search feature. |
 | **Knowledge Base** | | |
 | `list_knowledge_bases` | List the user's accessible knowledge bases with file counts. | Always available. |
+| `query_knowledge_bases` | Search knowledge bases by semantic similarity to the query. Finds knowledge bases whose name/description match the meaning of your query. Use this to discover relevant knowledge bases before querying their files. | Always available. |
 | `search_knowledge_bases` | Search knowledge bases by name and description. | Always available. |
+| `query_knowledge_files` | Search internal knowledge base files using semantic/vector search. This should be your first choice for finding information before searching the web. | Always available. |
 | `search_knowledge_files` | Search files across accessible knowledge bases by filename. | Always available. |
 | `view_knowledge_file` | Get the full content of a file from a knowledge base. | Always available. |
-| `query_knowledge_bases` | Search internal knowledge bases using semantic/vector search. Should be your first choice for finding information before searching the web. | Always available. |
 | **Image Gen** | | |
 | `generate_image` | Generates a new image based on a prompt (supports `steps`). | `ENABLE_IMAGE_GENERATION` enabled. |
 | `edit_image` | Edits an existing image based on a prompt and URL. | `ENABLE_IMAGE_EDIT` enabled. |
@@ -227,7 +228,7 @@ Interleaved thinking requires models with strong reasoning capabilities. This fe
 This is fundamentally different from a single-shot tool call. In an interleaved workflow, the model follows a cycle:

 1. **Reason**: Analyze the user's intent and identify information gaps.
-2. **Act**: Call a tool (e.g., `query_knowledge_bases` for internal docs or `search_web` and `fetch_url` for web research).
+2. **Act**: Call a tool (e.g., `query_knowledge_files` for internal docs or `search_web` and `fetch_url` for web research).
 3. **Think**: Read the tool's output and update its internal understanding.
 4. **Iterate**: If the answer isn't clear, call another tool (e.g., `view_knowledge_file` to read a specific document or `fetch_url` to read a specific page) or refine the search.
 5. **Finalize**: Only after completing this "Deep Research" cycle does the model provide a final, grounded answer.
diff --git a/docs/getting-started/env-configuration.mdx b/docs/getting-started/env-configuration.mdx
index 0c1171fb..ca09ad3a 100644
--- a/docs/getting-started/env-configuration.mdx
+++ b/docs/getting-started/env-configuration.mdx
@@ -12,7 +12,7 @@ As new variables are introduced, this page will be updated to reflect the growin

 :::info

-This page is up-to-date with Open WebUI release version [v0.7](https://github.com/open-webui/open-webui/releases/tag/v0.7), but is still a work in progress to later include more accurate descriptions, listing out options available for environment variables, defaults, and improving descriptions.
+This page is up-to-date with Open WebUI release version [v0.7.0](https://github.com/open-webui/open-webui/releases/tag/v0.7.0), but is still a work in progress; it will later be expanded with more accurate descriptions, the options available for each environment variable, and their defaults.

 :::
@@ -3642,7 +3642,6 @@ Note: If none of the specified languages are available and `en` was not in your
 - Type: `bool`
 - Default: `False`
 - Description: Specifies whether to apply a Voice Activity Detection (VAD) filter to Whisper Speech-to-Text.
-- Persistence: This environment variable is a `PersistentConfig` variable.

 #### `WHISPER_MODEL_AUTO_UPDATE`

@@ -3656,6 +3655,12 @@ Note: If none of the specified languages are available and `en` was not in your
 - Default: `None`
 - Description: Specifies the ISO 639-1 language Whisper uses for STT (ISO 639-2 for Hawaiian and Cantonese). Whisper predicts the language by default.

+#### `WHISPER_MULTILINGUAL`
+
+- Type: `bool`
+- Default: `False`
+- Description: Toggles whether to use the multilingual Whisper model. When set to `False`, the system uses the English-only model for better performance on English-centric tasks. When set to `True`, multiple languages are supported.
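The Whisper settings documented in the hunk above are ordinary environment variables. A minimal sketch of enabling the new multilingual toggle alongside the related options; the values shown are illustrative only, and in a Docker deployment each variable would be passed with `-e` instead of `export`:

```shell
# Illustrative values only; adjust to your deployment.
# With Docker, pass each variable with -e, e.g.:
#   docker run -e WHISPER_MULTILINGUAL=true -e WHISPER_LANGUAGE=de ...
export WHISPER_MULTILINGUAL=true   # use the multilingual Whisper model
export WHISPER_LANGUAGE=de         # ISO 639-1 code; leave empty to auto-detect
export WHISPER_VAD_FILTER=true     # filter out non-speech segments

echo "multilingual=$WHISPER_MULTILINGUAL language=$WHISPER_LANGUAGE"
# prints: multilingual=true language=de
```

Remember that some of these settings can also be changed in the Admin Panel, as noted in the env-variables table above.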
+
 ### Speech-to-Text (OpenAI)

 #### `AUDIO_STT_ENGINE`
diff --git a/docs/getting-started/quick-start/tab-docker/ManualDocker.md b/docs/getting-started/quick-start/tab-docker/ManualDocker.md
index 2bfe57ad..6d5c8159 100644
--- a/docs/getting-started/quick-start/tab-docker/ManualDocker.md
+++ b/docs/getting-started/quick-start/tab-docker/ManualDocker.md
@@ -23,7 +23,7 @@ docker pull ghcr.io/open-webui/open-webui:main-slim
 You can also pull a specific Open WebUI release version directly by using a versioned image tag. This is recommended for production environments to ensure stable and reproducible deployments.

 ```bash
-docker pull ghcr.io/open-webui/open-webui:v0.7
+docker pull ghcr.io/open-webui/open-webui:v0.7.0
 ```

 ## Step 2: Run the Container
diff --git a/docs/intro.mdx b/docs/intro.mdx
index 496ec1fd..c9b70232 100644
--- a/docs/intro.mdx
+++ b/docs/intro.mdx
@@ -99,9 +99,9 @@ ghcr.io/open-webui/open-webui:-
 Examples (pinned versions for illustration purposes only):

 ```
-ghcr.io/open-webui/open-webui:v0.7
-ghcr.io/open-webui/open-webui:v0.7-ollama
-ghcr.io/open-webui/open-webui:v0.7-cuda
+ghcr.io/open-webui/open-webui:v0.7.0
+ghcr.io/open-webui/open-webui:v0.7.0-ollama
+ghcr.io/open-webui/open-webui:v0.7.0-cuda
 ```

 ### Using the Dev Branch 🌙
diff --git a/docs/tutorials/tips/sqlite-database.md b/docs/tutorials/tips/sqlite-database.md
index 38429c8b..13109595 100644
--- a/docs/tutorials/tips/sqlite-database.md
+++ b/docs/tutorials/tips/sqlite-database.md
@@ -10,7 +10,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI
 :::

 > [!WARNING]
-> This documentation was created/updated based on version 0.7 and updated for recent migrations.
+> This documentation is based on version 0.7.0 and has been updated for recent migrations.

 ## Open-WebUI Internal SQLite Database
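Since the sqlite-database tutorial touched above walks through the internal database, one precaution is worth sketching: snapshot `webui.db` before any manual work. A minimal sketch under stated assumptions; the `data/webui.db` path and the `backup_db` helper name are hypothetical, and the container should be stopped first so the file is not written to mid-copy:

```shell
# Hypothetical helper: snapshot an SQLite database file before manual edits.
# The data/webui.db path is an assumption; adjust for your volume layout.
backup_db() {
  src="$1"
  dest="${src%.db}-$(date +%Y%m%d%H%M%S).db.bak"
  cp "$src" "$dest" && echo "$dest"   # print the backup path on success
}

# Usage (with the container stopped):
#   backup_db data/webui.db
```

The timestamped suffix keeps successive backups from overwriting each other, which matters when testing migrations repeatedly.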