commit b82465c86a (parent e2b99b33cd)
Author: DrMelone
Date: 2026-02-12 18:35:00 +01:00

6 changed files with 14 additions and 14 deletions


@@ -52,7 +52,7 @@ For Jupyter configuration, see the [Jupyter Notebook Integration](/tutorials/int
### Native Function Calling (Native Mode)
-When using **Native function calling mode** with a capable model (e.g., GPT-5, Claude 4.5, MiniMax M2.1), the code interpreter is available as a builtin tool called `execute_code`. This provides a more integrated experience:
+When using **Native function calling mode** with a capable model (e.g., GPT-5, Claude 4.5, MiniMax M2.5), the code interpreter is available as a builtin tool called `execute_code`. This provides a more integrated experience:
- **No XML tags required**: The model calls `execute_code(code)` directly
- **Same image handling**: Base64 image URLs in output are replaced with file URLs; model embeds via markdown
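In Native Mode the tool call itself is structured JSON rather than XML tags. As a minimal sketch, assuming the common OpenAI function-calling shape (the exact internal schema Open WebUI uses is not shown in this diff and is an assumption here), such a call might look like:

```python
import json

# Hypothetical shape of a native `execute_code` tool call, following
# the OpenAI function-calling convention; the exact schema Open WebUI
# uses internally is an assumption.
tool_call = {
    "type": "function",
    "function": {
        "name": "execute_code",
        "arguments": json.dumps({"code": "print(2 + 2)"}),
    },
}

# Arguments arrive as a JSON string and are decoded before the code
# is handed to the interpreter.
args = json.loads(tool_call["function"]["arguments"])
print(args["code"])
```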


@@ -44,7 +44,7 @@ Autonomous memory management works best with frontier models (GPT-5, Claude 4.5+
1. **Administrative Enablement**: Ensure the Memory feature is [enabled globally](#administrative-controls) by an administrator and that you have the required permissions.
2. **Native Mode (Agentic Mode)**: Enable **Native Function Calling** in the model's advanced parameters (**Admin Panel > Settings > Models > Model Specific Settings > Advanced Parameters**).
-3. **Quality Models Required**: To unlock these features effectively, use frontier models with strong reasoning capabilities (e.g., GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, MiniMax M2.1) for the best experience. Small local models may not effectively manage memories autonomously.
+3. **Quality Models Required**: To unlock these features effectively, use frontier models with strong reasoning capabilities (e.g., GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, MiniMax M2.5) for the best experience. Small local models may not effectively manage memories autonomously.
4. **Per-Model Category Toggle**: Ensure the **Memory** category is enabled for the model in **Workspace > Models > Edit > Builtin Tools** (enabled by default).
:::info Central Tool Documentation


@@ -98,7 +98,7 @@ In Default Mode, Open WebUI manages tool selection by injecting a specific promp
Native Mode (also called **Agentic Mode**) leverages the model's built-in capability to handle tool definitions and return structured tool calls (JSON). This is the **recommended mode** for high-performance agentic workflows.
:::warning Model Quality Matters
-**Agentic tool calling requires high-quality models to work reliably.** While small local models may technically support function calling, they often struggle with the complex reasoning required for multi-step tool usage. For best results, use frontier models like **GPT-5**, **Claude 4.5 Sonnet**, **Gemini 3 Flash**, or **MiniMax M2.1**. Small local models may produce malformed JSON or fail to follow the strict state management required for agentic behavior.
+**Agentic tool calling requires high-quality models to work reliably.** While small local models may technically support function calling, they often struggle with the complex reasoning required for multi-step tool usage. For best results, use frontier models like **GPT-5**, **Claude 4.5 Sonnet**, **Gemini 3 Flash**, or **MiniMax M2.5**. Small local models may produce malformed JSON or fail to follow the strict state management required for agentic behavior.
:::
#### Why use Native Mode (Agentic Mode)?
@@ -129,7 +129,7 @@ For reliable agentic tool calling, use high-tier frontier models:
- **GPT-5** (OpenAI)
- **Claude 4.5 Sonnet** (Anthropic)
- **Gemini 3 Flash** (Google)
-- **MiniMax M2.1**
+- **MiniMax M2.5**
These models excel at multi-step reasoning, proper JSON formatting, and autonomous tool selection.
:::
@@ -172,7 +172,7 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo
### Built-in System Tools (Native/Agentic Mode)
-🛠️ When **Native Mode (Agentic Mode)** is enabled, Open WebUI automatically injects powerful system tools. This unlocks truly agentic behaviors where capable models (like GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, or MiniMax M2.1) can perform multi-step research, explore knowledge bases, or manage user memory autonomously.
+🛠️ When **Native Mode (Agentic Mode)** is enabled, Open WebUI automatically injects powerful system tools. This unlocks truly agentic behaviors where capable models (like GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, or MiniMax M2.5) can perform multi-step research, explore knowledge bases, or manage user memory autonomously.
| Tool | Purpose |
|------|---------|


@@ -8,7 +8,7 @@ title: "Agentic Search & URL Fetching"
Open WebUI's web search has evolved from simple result injection to a fully **agentic research system**. By enabling **Native Function Calling (Agentic Mode)**, you allow quality models to independently explore the web, verify facts, and follow links autonomously.
:::tip Quality Models Required
-Agentic web search works best with frontier models like **GPT-5**, **Claude 4.5+**, **Gemini 3+**, or **MiniMax M2.1** that can reason about search results and decide when to dig deeper. Small local models may struggle with the multi-step reasoning required.
+Agentic web search works best with frontier models like **GPT-5**, **Claude 4.5+**, **Gemini 3+**, or **MiniMax M2.5** that can reason about search results and decide when to dig deeper. Small local models may struggle with the multi-step reasoning required.
:::
:::info Central Tool Documentation
@@ -27,7 +27,7 @@ For comprehensive information about all built-in agentic tools (including web se
## How to Enable Agentic Behavior
-To unlock these features, your model must support native tool calling and have strong reasoning capabilities (e.g., GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, MiniMax M2.1). Administrator-level configuration for these built-in system tools is handled via the [**Central Tool Calling Guide**](/features/plugin/tools#tool-calling-modes-default-vs-native).
+To unlock these features, your model must support native tool calling and have strong reasoning capabilities (e.g., GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, MiniMax M2.5). Administrator-level configuration for these built-in system tools is handled via the [**Central Tool Calling Guide**](/features/plugin/tools#tool-calling-modes-default-vs-native).
1. **Enable Web Search Globally**: Ensure a search engine is configured in **Admin Panel → Settings → Web Search**.
2. **Enable Model Capability**: In **Admin Panel → Settings → Models**, select your model and enable the **Web Search** capability.


@@ -83,7 +83,7 @@ Once Open WebUI is running:
:::
:::caution MiniMax Whitelisting
-Some providers, like **MiniMax**, do not expose their models via a `/models` endpoint. For these providers, you **must** manually add the Model ID (e.g., `MiniMax-M2.1`) to the **Model IDs (Filter)** list for them to appear in the UI.
+Some providers, like **MiniMax**, do not expose their models via a `/models` endpoint. For these providers, you **must** manually add the Model ID (e.g., `MiniMax-M2.5`) to the **Model IDs (Filter)** list for them to appear in the UI.
:::
* **Prefix ID**:


@@ -1,6 +1,6 @@
---
sidebar_position: 32
-title: "Integrate with MiniMax M2.1"
+title: "Integrate with MiniMax M2.5"
---
:::warning
@@ -11,9 +11,9 @@ This tutorial is a community contribution and is not supported by the Open WebUI
---
-# Integrating Open WebUI with MiniMax M2.1
+# Integrating Open WebUI with MiniMax M2.5
-MiniMax is a leading AI company providing high-performance coding-focused models. Their latest model, **MiniMax M2.1**, is specifically optimized for coding, reasoning, and multi-turn dialogue. This guide covers how to set up MiniMax via their cost-effective **Coding Plan** and integrate it into Open WebUI.
+MiniMax is a leading AI company providing high-performance coding-focused models. Their latest model, **MiniMax M2.5**, is specifically optimized for coding, reasoning, and multi-turn dialogue. This guide covers how to set up MiniMax via their cost-effective **Coding Plan** and integrate it into Open WebUI.
## Step 1: Subscribe to a MiniMax Coding Plan
@@ -57,7 +57,7 @@ Now, link MiniMax to your Open WebUI instance.
- **API Base URL**: `https://api.minimax.io/v1`
- **API Key**: `YOUR_CODING_PLAN_API_KEY`
4. **Important**: MiniMax does not expose models via a `/models` endpoint, so you must whitelist the model manually.
-5. In the **Model Whitelist**, type `MiniMax-M2.1` and click the **+** icon.
+5. In the **Model Whitelist**, type `MiniMax-M2.5` and click the **+** icon.
6. Click **Verify Connection** (you should see a success alert).
7. Click **Save** on the connection popup, then scroll down and click **Save** on the main Connections page.
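The connection steps above amount to Open WebUI sending OpenAI-compatible chat requests to the MiniMax base URL. A minimal sketch of that request shape, assuming the standard OpenAI chat-completions format (`YOUR_CODING_PLAN_API_KEY` is a placeholder; nothing is actually sent here):

```python
# Base URL and model ID from the connection steps above.
base_url = "https://api.minimax.io/v1"

# Standard OpenAI-compatible chat payload (assumed shape);
# Open WebUI builds the equivalent request internally.
payload = {
    "model": "MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Hello"}],
}

# The Coding Plan key goes in the Authorization header.
headers = {
    "Authorization": "Bearer YOUR_CODING_PLAN_API_KEY",
    "Content-Type": "application/json",
}

# Target endpoint for chat completions.
endpoint = f"{base_url}/chat/completions"
print(endpoint)
```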
@@ -66,10 +66,10 @@ Now, link MiniMax to your Open WebUI instance.
## Step 4: Start Chatting
-You are now ready to use MiniMax M2.1!
+You are now ready to use MiniMax M2.5!
1. Start a new chat.
-2. Select **MiniMax-M2.1** from the model dropdown menu.
+2. Select **MiniMax-M2.5** from the model dropdown menu.
3. Send a message. Reasoning and thinking work by default without any additional configuration.
![MiniMax Chat interface](/images/tutorials/minimax/minimax-chat.png)