tooling descriptions

This commit is contained in:
DrMelone
2025-12-20 21:39:27 +01:00
parent 77acd2064f
commit f2fac4a0ff
2 changed files with 71 additions and 98 deletions


@@ -5,140 +5,112 @@ title: "Tools"
# ⚙️ What are Tools?
Tools are the various ways you can extend an LLM's capabilities beyond simple text generation. When enabled, they allow your chatbot to do amazing things — like search the web, scrape data, generate images, talk back using AI voices, and more.
Think of Tools as useful plugins that your AI can use when chatting with you.
Because there are several ways to integrate "Tools" in Open WebUI, it's important to understand which type you are using.
---
## 🧩 Tooling Taxonomy: Which "Tool" are you using?
Users often encounter the term "Tools" in different contexts. Here is how to distinguish them:
| Type | Location in UI | Best For... | Source |
| :--- | :--- | :--- | :--- |
| **Native Features** | Admin/Settings | Core platform functionality | Built-in to Open WebUI |
| **Workspace Tools** | `Workspace > Tools` | User-created or community Python scripts | [Community Library](https://openwebui.com/search) |
| **Native MCP (HTTP)** | `Settings > Connections` | Standard MCP servers reachable via HTTP/SSE | External MCP Servers |
| **MCP via Proxy (MCPO)** | `Settings > Connections` | Local stdio-based MCP servers (e.g., Claude Desktop tools) | [MCPO Adapter](https://github.com/open-webui/mcpo) |
| **OpenAPI Servers** | `Settings > Connections` | Standard REST/OpenAPI web services | External Web APIs |
### 1. Native Features (Built-in)
These are deeply integrated into Open WebUI and generally don't require external scripts.
- **Web Search**: Integrated via engines like SearXNG, Google, or Tavily.
- **Image Generation**: Integrated with DALL-E, ComfyUI, or Automatic1111.
- **RAG (Knowledge)**: The ability to query uploaded documents (`#`).
### 2. Workspace Tools (Custom Plugins)
These are **Python scripts** that run directly within the Open WebUI environment.
- **Capability**: Can do anything Python can do (web scraping, complex math, API calls).
- **Access**: Managed via the `Workspace` menu.
- **Safety**: Always review code before importing, as these run on your server.
- **⚠️ Security Warning**: Normal or untrusted users should **not** be given permission to access the Workspace Tools section. This access allows a user to upload and execute arbitrary Python code on your server, which could lead to a full system compromise.
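For orientation, here is a minimal sketch of what a Workspace Tool script can look like. The `Tools` class and method-docstring conventions follow Open WebUI's plugin format; the `count_words` method itself is a made-up example, not a real community tool.

```python
"""
title: Word Counter
description: A minimal example tool that counts the words in a text.
"""


class Tools:
    def __init__(self):
        pass

    def count_words(self, text: str) -> str:
        """
        Count the number of words in a piece of text.

        :param text: The text to analyze.
        :return: A sentence stating the word count.
        """
        # The docstring and type hints above are what the LLM sees
        # when deciding whether and how to call this tool.
        return f"The text contains {len(text.split())} words."
```

Once imported, the model can call `count_words` whenever a chat message makes it relevant.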
### 3. MCP (Model Context Protocol) 🔌
MCP is an open standard that allows LLMs to interact with external data and tools.
- **Native HTTP MCP**: Open WebUI can connect directly to any MCP server that exposes an HTTP/SSE endpoint.
- **MCPO (Proxy)**: Most community MCP servers use `stdio` (local command line). To use these in Open WebUI, you use the [**MCPO Proxy**](../../plugin/tools/openapi-servers/mcp.mdx) to bridge the connection.
### 4. OpenAPI / Function Calling Servers
Generic web servers that provide an OpenAPI (`.json` or `.yaml`) specification. Open WebUI can ingest these specs and treat every endpoint as a tool.
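For illustration, here is a hypothetical fragment of such a spec. Open WebUI reads the `operationId`, summary, and parameter schema to present the endpoint as a callable tool; the weather endpoint and its fields are invented for this example.

```yaml
openapi: 3.1.0
info:
  title: Weather Tool Server   # hypothetical example server
  version: 1.0.0
paths:
  /weather:
    get:
      operationId: get_weather   # typically surfaced as the tool's name
      summary: Get current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current weather conditions
```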
---
## 📦 How to Install & Manage Workspace Tools
Workspace Tools are the most common way to extend your instance with community features.
1. Go to [Community Tool Library](https://openwebui.com/search)
2. Choose a Tool, then click the **Get** button.
3. Enter your Open WebUI instance's URL (e.g. `http://localhost:3000`).
4. Click **Import to WebUI**.
:::warning Safety Tip
Never import a Tool you don't recognize or trust. These are Python scripts and might run unsafe code on your host system. **Crucially, ensure you only grant "Tool" permissions to trusted users**, as the ability to create or import tools is equivalent to the ability to run arbitrary code on the server.
:::
---
## 🔧 How to Use Tools in Chat
Once installed or connected, here's how to enable them for your conversations:
### Option 1: Enable on-the-fly (Specific Chat)
While chatting, click the **➕ (plus)** icon in the input area. You'll see a list of available Tools — you can enable them specifically for that session.
### Option 2: Enable by Default (Global/Model Level)
1. Go to **Workspace ➡️ Models**.
2. Choose the model you're using and click the ✏️ edit icon.
3. Scroll to the **Tools** section.
4. ✅ Check the Tools you want this model to always have access to by default.
5. Click **Save**.
:::tip
Enabling a Tool gives the model permission to use it — but the model may not call it unless it's actually useful for the task.
:::
You can also let your LLM auto-select the right Tools using the [**AutoTool Filter**](https://openwebui.com/f/hub/autotool_filter/).
---
## 🧠 Choosing How Tools Are Used: Default vs Native
Once Tools are enabled, Open WebUI gives you two different ways to let your LLM interact with them. You can switch this via the chat settings:
![Chat Controls](/images/features/plugin/tools/chat-controls.png)
1. Open a chat with your model.
2. Click ⚙️ **Chat Controls > Advanced Params**.
3. Look for the **Function Calling** setting and switch between:
### 🟡 Default Mode (Prompt-based)
Here, your LLM doesn't need to natively support function calling. We guide the model using a smart tool-selection prompt template to select and use a Tool.
- ✅ Works with **practically any model** (including smaller local models).
- 💡 **Admin Note**: You can also toggle the default mode for each specific model in the **Admin Panel > Settings > Models > Advanced Parameters**.
- ❗ Not as reliable as Native Mode when chaining multiple complex tools.
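Conceptually, the prompt-based approach works something like this simplified sketch (this is not Open WebUI's actual template): the tool descriptions are injected into a prompt, and the model is asked to answer with JSON naming the tool to call.

```python
import json

# Hypothetical tool specs, as they might be described to the model.
tools = [
    {"name": "web_search", "description": "Search the web for a query."},
    {"name": "get_time", "description": "Get the current time."},
]


def build_selection_prompt(user_message: str) -> str:
    """Build a prompt asking the model to pick a tool and reply as JSON."""
    tool_list = "\n".join(f"- {t['name']}: {t['description']}" for t in tools)
    return (
        "You have access to these tools:\n"
        f"{tool_list}\n"
        'Reply ONLY with JSON like {"tool": "<name>", "arguments": {...}}.\n'
        f"User message: {user_message}"
    )


def parse_selection(model_reply: str) -> dict:
    """Parse the model's JSON reply into a tool call to execute."""
    return json.loads(model_reply)


# Parsing a simulated model reply:
call = parse_selection('{"tool": "web_search", "arguments": {"query": "news"}}')
```

Because the whole mechanism rides on the model following text instructions, smaller models sometimes emit malformed JSON or pick the wrong tool, which is why this mode is less reliable for chaining.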
### 🟢 Native Mode (Built-in Function Calling)
If your model supports native function calling (like GPT-4o, Gemini, Claude, or GPT-5), use this for a faster, more accurate experience where the LLM decides exactly when and how to call tools.
- ✅ Fast, accurate, and can chain multiple tools in one response.
- ❗ Requires a model that explicitly supports tool-calling schemas.
| Mode | Who its for | Pros | Cons |
|----------|----------------------------------|-----------------------------------------|--------------------------------------|
| **Default** | Practically any model (basic/local) | Broad compatibility, safer, flexible | May be less accurate or slower |
| **Native** | GPT-4o, Gemini, Claude, GPT-5, etc. | Fast, smart, excellent tool chaining | Needs proper function call support |
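In Native Mode, each enabled tool is translated into the model's own tool-calling schema rather than a text prompt. For an OpenAI-compatible model, that schema looks roughly like the following (the `get_weather` tool is a made-up example):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
}
```

Because the model was trained to emit structured calls against schemas like this, it can decide when to call a tool and chain several calls in one turn without relying on prompt instructions.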
---
## 🚀 Summary & Next Steps
Tools bring your AI to life by giving it hands to interact with the world.
- **Browse Tools**: [openwebui.com/search](https://openwebui.com/search)
- **Advanced Setup**: Learn more about [MCP Support](./openapi-servers/mcp.mdx)
- **Development**: [Writing your own Custom Toolkits](./development.mdx)


@@ -54,6 +54,7 @@ const config: Config = {
// blog: false,
blog: {
showReadingTime: true,
onUntruncatedBlogPosts: "ignore",
// Please change this to your repo.
// Remove this to remove the "edit this page" links.
// editUrl: