refac: reorg

This commit is contained in:
Timothy Jaeryang Baek
2025-11-13 18:49:50 -05:00
parent 365e40a9d5
commit 36bd296693
17 changed files with 20 additions and 13 deletions

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 400
sidebar_position: 200
title: "⭐ Features"
---

View File

@@ -1,6 +1,6 @@
---
sidebar_position: 7
title: "Pipeline Tutorials"
title: "Tutorials"
---
## Pipeline Tutorials

View File

@@ -5,7 +5,7 @@ title: "Valves"
## Valves
`Valves` (see the dedicated [Valves & UserValves](/features/plugin/valves) page) can also be set for `Pipeline`. In short, `Valves` are input variables that are set per pipeline.
`Valves` (see the dedicated [Valves & UserValves](/features/plugin/development/valves) page) can also be set for `Pipeline`. In short, `Valves` are input variables that are set per pipeline.
`Valves` are set as a subclass of the `Pipeline` class, and initialized as part of the `__init__` method of the `Pipeline` class.

View File

@@ -0,0 +1,7 @@
{
"label": "Development",
"position": 800,
"link": {
"type": "generated-index"
}
}

View File

@@ -92,7 +92,7 @@ Below is a comprehensive table of **all supported `type` values** for events, al
| `chat:message:files`,<br/>`files` | Set or overwrite message files (for uploads, output) | `{files: [...]}` |
| `chat:title` | Set (or update) the chat conversation title | Topic string OR `{title: ...}` |
| `chat:tags` | Update the set of tags for a chat | Tag array or object |
| `source`,<br/>`citation` | Add a source/citation, or code execution result | For code: See [below.](/docs/features/plugin/events/index.mdx#source-or-citation-and-code-execution) |
| `source`,<br/>`citation` | Add a source/citation, or code execution result | For code: See [below.](/features/plugin/development/events#source-or-citation-and-code-execution) |
| `notification` | Show a notification ("toast") in the UI | `{type: "info" or "success" or "error" or "warning", content: "..."}` |
| `confirmation` <br/>(needs `__event_call__`) | Ask for confirmation (OK/Cancel dialog) | `{title: "...", message: "..."}` |
| `input` <br/>(needs `__event_call__`) | Request simple user input ("input box" dialog) | `{title: "...", message: "...", placeholder: "...", value: ...}` |

View File

@@ -1,6 +1,6 @@
---
sidebar_position: 20
title: "Special Arguments"
sidebar_position: 999
title: "Reserved Arguments"
---
:::warning

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 9999
title: "Migrating Tools & Functions: 0.4 to 0.5"
---

View File

@@ -52,7 +52,7 @@ Each tool must have type hints for arguments. The types may also be nested, such
### Valves and UserValves - (optional, but HIGHLY encouraged)
Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves page](/features/plugin/valves/index.mdx).
Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves page](/features/plugin/development/valves).
### Optional Arguments
Below is a list of optional arguments your tools can depend on:

View File

@@ -0,0 +1,176 @@
---
sidebar_position: 10
title: "FAQ"
---
#### 🌐 Q: Why isn't my local OpenAPI tool server accessible from the WebUI interface?
**A:** If your tool server is running locally (e.g., http://localhost:8000), browser-based clients may be restricted from accessing it due to CORS (Cross-Origin Resource Sharing) policies.
Make sure to explicitly enable CORS headers in your OpenAPI server. For example, if you're using FastAPI, you can add:
```python
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # or specify your client origin
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
```
Also, if Open WebUI is served over HTTPS (e.g., https://yourdomain.com), your local server must meet one of the following conditions:
- Be accessed from the same domain using HTTPS (e.g., https://localhost:8000).
- OR run on localhost (127.0.0.1) to allow browsers to relax security for local development.
- Otherwise, browsers may block insecure requests from HTTPS pages to HTTP APIs due to mixed-content rules.
To work securely in production over HTTPS, your OpenAPI servers must also be served over HTTPS.
---
#### 🚀 Q: Do I need to use FastAPI for my server implementation?
**A:** No! While our reference implementations are written using FastAPI for clarity and ease of use, you can use any framework or language that produces a valid OpenAPI (Swagger) specification. Some common choices include:
- FastAPI (Python)
- Flask + Flask-RESTX (Python)
- Express + Swagger UI (JavaScript/Node)
- Spring Boot (Java)
- Go with Swag or Echo
The key is to ensure your server exposes a valid OpenAPI schema, and that it communicates over HTTP(S).
It is important to set a custom operationId for all endpoints.
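For example, FastAPI lets you pass `operation_id="..."` to each route decorator, and frameworks that auto-generate IDs often produce unwieldy names. As a quick sanity check, a small helper (hypothetical, not part of any library) can flag endpoints in your generated schema that still lack an explicit `operationId`:

```python
# Hypothetical helper: scan a generated OpenAPI schema (loaded as a dict)
# and report any operations missing an explicit operationId.
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def missing_operation_ids(schema: dict) -> list[str]:
    missing = []
    for path, path_item in schema.get("paths", {}).items():
        for method, operation in path_item.items():
            if method in HTTP_METHODS and "operationId" not in operation:
                missing.append(f"{method.upper()} {path}")
    return sorted(missing)
```

Run it against the JSON your server serves (e.g., `/openapi.json` on FastAPI) and fix any operations it reports.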
---
#### 🚀 Q: Why choose OpenAPI over MCP?
**A:** OpenAPI wins over MCP in most real-world scenarios due to its simplicity, tooling ecosystem, stability, and developer-friendliness. Here's why:
- ✅ **Reuse Your Existing Code**: If you've built REST APIs before, you're mostly done—you don't need to rewrite your logic. Just define a compliant OpenAPI spec and expose your current code as a tool server.
With MCP, you had to reimplement your tool logic inside a custom protocol layer, duplicating work and increasing the surface area to maintain.
- 💼 **Less to Maintain & Debug**: OpenAPI fits naturally into modern dev workflows. You can test endpoints with Postman, inspect logs with built-in APIs, troubleshoot easily with mature ecosystem tools—and often without modifying your core app at all.
MCP introduced new layers of transport, schema parsing, and runtime quirks, all of which had to be debugged manually.
- 🌍 **Standards-Based**: OpenAPI is widely adopted across the tech industry. Its well-defined structure means tools, agents, and servers can interoperate immediately, without needing special bridges or translations.
- 🧰 **Better Tooling**: There's an entire universe of tools that support OpenAPI—automatic client/server generation, documentation, validation, mocking, testing, and even security audit tools.
- 🔐 **First-Class Security Support**: OpenAPI includes native support for things like OAuth2, JWTs, API Keys, and HTTPS—making it easier to build secure endpoints with common libraries and standards.
- 🧠 **More Devs Already Know It**: Using OpenAPI means you're speaking a language already familiar to backend teams, frontend developers, DevOps, and product engineers. There's no learning curve or costly onboarding required.
- 🔄 **Future-Proof & Extensible**: OpenAPI evolves with API standards and remains forward-compatible. MCP, by contrast, was bespoke and experimental—often requiring changes as the surrounding ecosystem changed.
🧵 Bottom line: OpenAPI lets you do more with less effort, less code duplication, and fewer surprises. It's a production-ready, developer-friendly route to powering LLM tools—without rebuilding everything from scratch.
---
#### 🔐 Q: How do I secure my OpenAPI tool server?
**A:** OpenAPI supports industry-standard security mechanisms like:
- OAuth 2.0
- API Key headers
- JWT (JSON Web Token)
- Basic Auth
Use HTTPS in production to encrypt data in transit, and restrict endpoints with proper auth/authz methods as appropriate. You can incorporate these directly in your OpenAPI schema using the securitySchemes field.
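For instance, an API-key header scheme declared under `components.securitySchemes` and required on all operations looks like this (shown here as a Python fragment of the schema; the scheme name `ApiKeyAuth` and header name are illustrative):

```python
# Fragment of an OpenAPI 3 document declaring an API-key header scheme.
security_fragment = {
    "components": {
        "securitySchemes": {
            "ApiKeyAuth": {           # arbitrary scheme name, referenced below
                "type": "apiKey",     # per the spec: apiKey | http | oauth2 | openIdConnect
                "in": "header",
                "name": "X-API-Key",  # the header clients must send
            }
        }
    },
    # Applying the scheme at the top level requires it on every operation.
    "security": [{"ApiKeyAuth": []}],
}
```

Your server must still enforce the key on each request; the schema only documents the requirement for clients.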
---
#### ❓ Q: What kind of tools can I build using OpenAPI tool servers?
**A:** If it can be exposed via a REST API, you can build it. Common tool types include:
- Filesystem operations (read/write files, list directories)
- Git and document repository access
- Database querying or schema exploration
- Web scrapers or summarizers
- External SaaS integrations (e.g., Salesforce, Jira, Slack)
- LLM-attached memory stores / RAG components
- Secure internal microservices exposed to your agent
---
#### 🔌 Q: Can I run more than one tool server at the same time?
**A:** Absolutely. Each tool server runs independently and exposes its own OpenAPI schema. Your agent configuration can point to multiple tool servers, allowing you to mix and match based on need.
There's no limit—just ensure each server runs on its own port or address and is reachable by the agent host.
---
#### 🧪 Q: How do I test a tool server before linking it to an LLM agent?
**A:** You can test your OpenAPI tool servers using:
- Swagger UI or ReDoc (built into FastAPI by default)
- Postman or Insomnia
- curl or httpie from the command line
- Python's `requests` module
- OpenAPI validators and mockers
Once validated, you can register the tool server with an LLM agent or through Open WebUI.
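For a minimal scripted check (standard library only; `/openapi.json` is FastAPI's default schema path, so adjust it for other frameworks), you can verify the server publishes a parseable schema:

```python
import json
import urllib.request

def list_tool_paths(base_url: str) -> list[str]:
    # Fetch the server's OpenAPI schema; FastAPI serves it at /openapi.json
    # by default (other frameworks may use a different path).
    with urllib.request.urlopen(f"{base_url}/openapi.json") as resp:
        schema = json.load(resp)
    assert "openapi" in schema and "paths" in schema, "not a valid OpenAPI schema"
    return sorted(schema["paths"])

# print(list_tool_paths("http://localhost:8000"))
```

If this prints the endpoints you expect, the server is ready to register.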
---
#### 🛠️ Q: Can I extend or customize the reference servers?
**A:** Yes! All servers in the servers/ directory are built to be simple templates. Fork and modify them to:
- Add new endpoints and business logic
- Integrate authentication
- Change response formats
- Connect to new services or internal APIs
- Deploy via Docker, Kubernetes, or any cloud host
---
#### 🌍 Q: Can I run OpenAPI tool servers on cloud platforms like AWS or GCP?
**A:** Yes. These servers are plain HTTP services. You can deploy them as:
- AWS Lambda with API Gateway (serverless)
- EC2 or GCP Compute Engine instances
- Kubernetes services in GKE/EKS/AKS
- Cloud Run or App Engine
- Render, Railway, Heroku, etc.
Just make sure they're securely configured and publicly reachable (or VPN'd) if needed by the agent or user.
---
#### 🧪 Q: What if I have an existing MCP server?
**A:** Great news! With our MCP-to-OpenAPI bridge, [mcpo](https://github.com/open-webui/mcpo), exposing your existing MCP-based tools as OpenAPI-compatible APIs is easier than ever. No rewrites, no headaches — just plug and go! 🚀
If you've already built tools using the MCP protocol, `mcpo` helps you instantly unlock compatibility with Open WebUI and any OpenAPI-based agent — ensuring your hard work remains fully accessible and future-ready.
[Check out the optional Bridge to MCP section in the docs for setup instructions.](https://github.com/open-webui/openapi-servers?tab=readme-ov-file#-bridge-to-mcp-optional)
**Quick Start:**
```bash
uvx mcpo --port 8000 -- uvx mcp-server-time --local-timezone=America/New_York
```
✨ That's it — your MCP server is now OpenAPI-ready!
---
#### 🗂️ Q: Can one OpenAPI server implement multiple tools?
**A:** Yes. A single OpenAPI server can offer multiple related capabilities grouped under different endpoints. For example, a document server may provide search, upload, OCR, and summarization—all within one schema.
You can also modularize completely by creating one OpenAPI server per tool if you prefer isolation and flexibility.
---
🙋 Have more questions? Visit the GitHub discussions for help and feedback from the community:
👉 [Community Discussions](https://github.com/open-webui/openapi-servers/discussions)

View File

@@ -0,0 +1,70 @@
---
sidebar_position: 400
title: "OpenAPI Tool Servers"
---
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
# 🌟 OpenAPI Tool Servers
This repository provides reference OpenAPI Tool Server implementations, making it easy and secure for developers to integrate external tooling and data sources into LLM agents and workflows. Designed for maximum ease of use and minimal learning curve, these implementations use the widely adopted, battle-tested [OpenAPI specification](https://www.openapis.org/) as the standard protocol.
By leveraging OpenAPI, we eliminate the need for a proprietary or unfamiliar communication protocol, ensuring you can quickly and confidently build or integrate servers. This means less time spent figuring out custom interfaces and more time building powerful tools that enhance your AI applications.
## ☝️ Why OpenAPI?
- **Established Standard**: OpenAPI is a widely used, production-proven API standard backed by thousands of tools, companies, and communities.
- **No Reinventing the Wheel**: No additional documentation or proprietary spec confusion. If you build REST APIs or use OpenAPI today, you're already set.
- **Easy Integration & Hosting**: Deploy your tool servers externally or locally without vendor lock-in or complex configurations.
- **Strong Security Focus**: Built around HTTP/REST APIs, OpenAPI inherently supports widely used, secure communication methods including HTTPS and well-proven authentication standards (OAuth, JWT, API Keys).
- **Future-Friendly & Stable**: Unlike less mature or experimental protocols, OpenAPI promises reliability, stability, and long-term community support.
## 🚀 Quickstart
Get started quickly with our reference FastAPI-based implementations provided in the `servers/` directory. (You can adapt these examples into your preferred stack as needed, such as using [FastAPI](https://fastapi.tiangolo.com/), [FastOpenAPI](https://github.com/mr-fatalyst/fastopenapi) or any other OpenAPI-compatible library):
```bash
git clone https://github.com/open-webui/openapi-servers
cd openapi-servers
```
### With Bash
```bash
# Example: Installing dependencies for a specific server 'filesystem'
cd servers/filesystem
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --reload
```
The filesystem server should be reachable at: [http://localhost:8000](http://localhost:8000)
The interactive documentation will be at: [http://localhost:8000/docs](http://localhost:8000/docs)
### With Docker
If you have docker compose installed, bring the servers up with:
```bash
docker compose up
```
The services will be reachable from:
* [Filesystem localhost:8081](http://localhost:8081)
* [memory server localhost:8082](http://localhost:8082)
* [time-server localhost:8083](http://localhost:8083)
Now, simply point your OpenAPI-compatible clients or AI agents to your local or publicly deployed URL—no configuration headaches, no complicated transports.
## 🌱 Open WebUI Community
- For general discussions, technical exchange, and announcements, visit our [Community Discussions](https://github.com/open-webui/openapi-servers/discussions) page.
- Have ideas or feedback? Please open an issue!

View File

@@ -0,0 +1,199 @@
---
sidebar_position: 3
title: "MCP Support"
---
This documentation explains how to easily set up and deploy the [**MCP (Model Context Protocol)-to-OpenAPI proxy server** (mcpo)](https://github.com/open-webui/mcpo) provided by Open WebUI. Learn how you can effortlessly expose MCP-based tool servers using standard, familiar OpenAPI endpoints suitable for end-users and developers.
### 📌 What is the MCP Proxy Server?
The MCP-to-OpenAPI proxy server lets you use tool servers implemented with MCP (Model Context Protocol) directly via standard REST/OpenAPI APIs—no need to manage unfamiliar or complicated custom protocols. If you're an end-user or application developer, this means you can interact easily with powerful MCP-based tooling directly through familiar REST-like endpoints.
### 💡 Why Use mcpo?
While MCP tool servers are powerful and flexible, they commonly communicate via standard input/output (stdio)—often running on your local machine where they can easily access your filesystem, environment, and other native system capabilities.
That's a strength—but also a limitation.
If you want to deploy your main interface (like Open WebUI) in the cloud, you quickly run into a problem: your cloud instance can't speak directly to an MCP server running locally on your machine via stdio.
[That's where mcpo comes in with a game-changing solution.](https://github.com/open-webui/mcpo)
MCP servers typically rely on raw stdio communication, which is:
- 🔓 Inherently insecure across environments
- ❌ Incompatible with most modern tools, UIs, or platforms
- 🧩 Lacking critical features like authentication, documentation, and error handling
The mcpo proxy eliminates those issues—automatically:
- ✅ Instantly compatible with existing OpenAPI tools, SDKs, and clients
- 🛡 Wraps your tools with secure, scalable, and standards-based HTTP endpoints
- 🧠 Auto-generates interactive OpenAPI documentation for every tool, entirely config-free
- 🔌 Uses plain HTTP—no socket setup, daemon juggling, or platform-specific glue code
So even though adding mcpo might at first seem like "just one more layer"—in reality, it simplifies everything while giving you:
- Better integration ✅
- Better security ✅
- Better scalability ✅
- Happier developers & users ✅
✨ With mcpo, your local-only AI tools become cloud-ready, UI-friendly, and instantly interoperable—without changing a single line of tool server code.
### ✅ Quickstart: Running the Proxy Locally
Here's how simple it is to launch the MCP-to-OpenAPI proxy server using the lightweight, easy-to-use tool **mcpo** ([GitHub Repository](https://github.com/open-webui/mcpo)):
1. **Prerequisites**
- **Python 3.8+** with `pip` installed.
- MCP-compatible application (for example: `mcp-server-time`)
- (Optional but recommended) `uv` installed for faster startup and zero-config convenience.
2. **Install mcpo**
Using **uv** (recommended):
```bash
uvx mcpo --port 8000 -- your_mcp_server_command
```
Or using `pip`:
```bash
pip install mcpo
mcpo --port 8000 -- your_mcp_server_command
```
3. 🚀 **Run the Proxy Server**
To start your MCP-to-OpenAPI proxy server, you need an MCP-compatible tool server. If you don't have one yet, the MCP community provides various ready-to-use MCP server implementations.
✨ **Where to find MCP Servers?**
You can discover officially supported MCP servers at the following repository example:
- [modelcontextprotocol/servers on GitHub](https://github.com/modelcontextprotocol/servers)
For instance, the popular **Time MCP Server** is documented [here](https://github.com/modelcontextprotocol/servers/blob/main/src/time/README.md), and is typically referenced clearly in the README, inside the provided MCP configuration. Specifically, the README states:
> Add to your Claude settings:
>
> ```json
> "mcpServers": {
> "time": {
> "command": "uvx",
> "args": ["mcp-server-time", "--local-timezone=America/New_York"]
> }
> }
> ```
🔑 **Translating this MCP setup to a quick local proxy command**:
You can easily run the recommended MCP server (`mcp-server-time`) directly through the **MCP-to-OpenAPI proxy** (`mcpo`) like this:
```bash
uvx mcpo --port 8000 -- uvx mcp-server-time --local-timezone=America/New_York
```
That's it! You're now running the MCP-to-OpenAPI Proxy locally and exposing the powerful **MCP Time Server** through standard OpenAPI endpoints accessible at:
- 📖 **Interactive OpenAPI Documentation:** [`http://localhost:8000/docs`](http://localhost:8000/docs)
Feel free to replace `uvx mcp-server-time --local-timezone=America/New_York` with your preferred MCP Server command from other available MCP implementations found in the official repository.
🤝 **To integrate with Open WebUI after launching the server, check our [docs](https://docs.openwebui.com/openapi-servers/open-webui/).**
### 🚀 Accessing the Generated APIs
As soon as it starts, the MCP Proxy (`mcpo`) automatically:
- Discovers MCP tools dynamically and generates REST endpoints.
- Creates interactive, human-readable OpenAPI documentation accessible at:
- `http://localhost:8000/docs`
Simply call the auto-generated API endpoints directly via HTTP clients, AI agents, or other OpenAPI tools of your preference.
### 📖 Example Workflow for End-Users
Assuming you started the above server command (`uvx mcp-server-time`):
- Visit your local API documentation at `http://localhost:8000/docs`.
- Select a generated endpoint (e.g., `/get_current_time`) and use the provided interactive form.
- Click "**Execute**" and instantly receive your response.
No setup complexity—just instant REST APIs.
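The same call works outside the browser. Here is a sketch using only the standard library (it assumes the time server above; mcpo exposes each tool as a POST endpoint, and the exact parameter names are shown in the interactive docs and may differ per server):

```python
import json
import urllib.request

def call_tool(base_url: str, tool_path: str, payload: dict) -> dict:
    # POST a JSON body to one of the proxy's generated tool endpoints
    # and decode the JSON response.
    req = urllib.request.Request(
        f"{base_url}{tool_path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative call; check /docs for the real parameter names:
# result = call_tool("http://localhost:8000", "/get_current_time",
#                    {"timezone": "America/New_York"})
```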
## 🚀 Deploying in Production (Example)
Deploying your MCP-to-OpenAPI proxy (powered by mcpo) is straightforward. Here's how to easily Dockerize and deploy it to cloud or VPS solutions:
### 🐳 Dockerize your Proxy Server using mcpo
1. **Dockerfile Example**
Create the following `Dockerfile` inside your deployment directory:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN pip install mcpo uv
# Replace with your MCP server command; example: uvx mcp-server-time
CMD ["uvx", "mcpo", "--host", "0.0.0.0", "--port", "8000", "--", "uvx", "mcp-server-time", "--local-timezone=America/New_York"]
```
2. **Build & Run the Container Locally**
```bash
docker build -t mcp-proxy-server .
docker run -d -p 8000:8000 mcp-proxy-server
```
3. **Deploying Your Container**
Push to DockerHub or another registry:
```bash
docker tag mcp-proxy-server yourdockerusername/mcp-proxy-server:latest
docker push yourdockerusername/mcp-proxy-server:latest
```
Deploy using Docker Compose, Kubernetes YAML manifests, or your favorite cloud container services (AWS ECS, Azure Container Instances, Render.com, or Heroku).
✔️ Your production MCP servers are now effortlessly available via REST APIs!
## 🧑‍💻 Technical Details and Background
### 🍃 How It Works (Technical Summary)
- **Dynamic Schema Discovery & Endpoints:** At server startup, the proxy connects to the MCP server to query available tools. It automatically builds FastAPI endpoints based on the MCP tool schemas, creating concise and clear REST endpoints.
- **OpenAPI Auto-documentation:** Endpoints generated are seamlessly documented and available via FastAPI's built-in Swagger UI (`/docs`). No extra doc writing required.
- **Asynchronous & Performant**: Built on robust asynchronous libraries, ensuring speed and reliability for concurrent users.
### 📚 Under the Hood:
- FastAPI (Automatic routing & docs generation)
- MCP Client (Standard MCP integration & schema discovery)
- Standard JSON over HTTP (Easy integration)
## ⚡️ Why is the MCP-to-OpenAPI Proxy Superior?
Here's why leveraging MCP servers through OpenAPI via the proxy approach is significantly better and why Open WebUI enthusiastically supports it:
- **User-friendly & Familiar Interface**: No custom clients; just HTTP REST endpoints you already know.
- **Instant Integration**: Immediately compatible with thousands of existing REST/OpenAPI tools, SDKs, and services.
- **Powerful & Automatic Docs**: Built-in Swagger UI documentation is automatically generated, always accurate, and maintained.
- **No New Protocol overhead**: Eliminates the necessity to directly handle MCP-specific protocol complexities and socket communication issues.
- **Battle-Tested Security & Stability**: Inherits well-established HTTPS transport, standard auth methods (JWT, API keys), solid async libraries, and FastAPI's proven robustness.
- **Future-Proof**: The MCP proxy uses existing, stable, standard REST/OpenAPI formats, guaranteeing long-term community support and evolution.
🌟 **Bottom line:** MCP-to-OpenAPI makes your powerful MCP-based AI tools broadly accessible through intuitive, reliable, and scalable REST endpoints. Open WebUI proudly supports and recommends this best-in-class approach.
## 📢 Community & Support
- For questions, suggestions, or feature requests, please use our [GitHub Issue tracker](https://github.com/open-webui/openapi-servers/issues) or join our [Community Discussions](https://github.com/open-webui/openapi-servers/discussions).
Happy integrations! 🌟🚀

View File

@@ -0,0 +1,211 @@
---
sidebar_position: 1
title: "Open WebUI Integration"
---
## Overview
Open WebUI v0.6+ supports seamless integration with external tools via OpenAPI tool servers — meaning you can easily extend your LLM workflows using custom or community-powered tool servers 🧰.
In this guide, you'll learn how to launch an OpenAPI-compatible tool server and connect it to Open WebUI through the intuitive user interface. Let's get started! 🚀
---
## Step 1: Launch an OpenAPI Tool Server
To begin, you'll need to start one of the reference tool servers available in the [openapi-servers repo](https://github.com/open-webui/openapi-servers). For quick testing, we'll use the time tool server as an example.
🛠️ Example: Starting the `time` server locally
```bash
git clone https://github.com/open-webui/openapi-servers
cd openapi-servers
# Navigate to the time server
cd servers/time
# Install required dependencies
pip install -r requirements.txt
# Start the server
uvicorn main:app --host 0.0.0.0 --reload
```
Once running, this will host a local OpenAPI server at http://localhost:8000, which you can point Open WebUI to.
![Time Server](/images/openapi-servers/open-webui/time-server.png)
---
## Step 2: Connect Tool Server in Open WebUI
Next, connect your running tool server to Open WebUI:
1. Open WebUI in your browser.
2. Open ⚙️ **Settings**.
3. Click on **Tools** to add a new tool server.
4. Enter the URL where your OpenAPI tool server is running (e.g., http://localhost:8000).
5. Click "Save".
![Settings Page](/images/openapi-servers/open-webui/settings.png)
### 🧑‍💻 User Tool Servers vs. 🛠️ Global Tool Servers
There are two ways to register tool servers in Open WebUI:
#### 1. User Tool Servers (added via regular Settings)
- Only accessible to the user who registered the tool server.
- The connection is made directly from the browser (client-side) by the user.
- Perfect for personal workflows or when testing custom/local tools.
#### 2. Global Tool Servers (added via Admin Settings)
Admins can manage shared tool servers available to all or selected users across the entire deployment:
- Go to 🛠️ **Admin Settings > Tools**.
- Add the tool server URL just as you would in user settings.
- These tools are treated similarly to Open WebUI's built-in tools.
#### Main Difference: Where Are Requests Made From?
The primary distinction between **User Tool Servers** and **Global Tool Servers** is where the API connection and requests are actually made:
- **User Tool Servers**
- Requests to the tool server are performed **directly from your browser** (the client).
- This means you can safely connect to localhost URLs (like `http://localhost:8000`)—even exposing private or development-only endpoints such as your local filesystem or dev tools—without risking exposure to the wider internet or other users.
- Your connection is isolated; only your browser can access that tool server.
- **Global Tool Servers**
- Requests are sent **from the Open WebUI backend/server** (not your browser).
- The backend must be able to reach the tool server URL you specify—so `localhost` means the backend server's localhost, *not* your computer's.
- Use this for sharing tools with other users across the deployment, but be mindful: since the backend makes the requests, you cannot access your personal local resources (like your own filesystem) through this method.
- Think security! Only expose remote/global endpoints that are safe and meant to be accessed by multiple users.
**Summary Table:**
| Tool Server Type | Request Origin | Use Localhost? | Use Case Example |
| ------------------ | -------------------- | ------------------ | ---------------------------------------- |
| User Tool Server | User's Browser (Client-side) | Yes (private to you) | Personal tools, local dev/testing |
| Global Tool Server | Open WebUI Backend (Server-side) | No (unless running on the backend itself) | Team/shared tools, enterprise integrations |
:::tip
User Tool Servers are best for personal or experimental tools, especially those running on your own machine, while Global Tool Servers are ideal for production or shared environments where everyone needs access to the same tools.
:::
### 👉 Optional: Using a Config File with mcpo
If you're running multiple tools through mcpo using a config file, take note:
🧩 Each tool is mounted under its own unique path!
For example, if you're using the memory and time tools simultaneously through mcpo, they'll each be available at a distinct route:
- http://localhost:8000/time
- http://localhost:8000/memory
This means:
- When connecting a tool in Open WebUI, you must enter the full route to that specific tool — do NOT enter just the root URL (http://localhost:8000).
- Add each tool individually in Open WebUI Settings using their respective subpath URLs.
![MCPO Config Tools Setting](/images/openapi-servers/open-webui/mcpo-config-tools.png)
✅ Good:
http://localhost:8000/time
http://localhost:8000/memory
🚫 Not valid:
http://localhost:8000
This ensures Open WebUI recognizes and communicates with each tool server correctly.
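For reference, an mcpo config file for this two-tool setup might look roughly like the following (it follows the familiar `mcpServers` format; the package names are examples, so check the mcpo README for the exact schema):

```json
{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=America/New_York"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Launched with something like `uvx mcpo --port 8000 --config config.json`, each entry is mounted at its own subpath (`/time`, `/memory`), which is why each tool must be added to Open WebUI individually.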
---
## Step 3: Confirm Your Tool Server Is Connected ✅
Once your tool server is successfully connected, Open WebUI will display a 👇 tool server indicator directly in the message input area:
📍 You'll now see this icon below the input box:
![Tool Server Indicator](/images/openapi-servers/open-webui/message-input.png)
Clicking this icon opens a popup where you can:
- View connected tool server information
- See which tools are available and which server they're provided by
- Debug or disconnect any tool if needed
🔍 Here's what the tool information modal looks like:
![Tool Info Modal Expanded](/images/openapi-servers/open-webui/info-modal.png)
### 🛠️ Global Tool Servers Look Different — And Are Hidden by Default!
If you've connected a Global Tool Server (i.e., one that's admin-configured), it will not appear automatically in the input area like user tool servers do.
Instead:
- Global tools are hidden by default and must be explicitly activated per user.
- To enable them, you'll need to click on the button in the message input area (bottom left of the chat box), and manually toggle on the specific global tool(s) you want to use.
Here's what that looks like:
![Global Tool Server Message Input](/images/openapi-servers/open-webui/global-message-input.png)
⚠️ Important Notes for Global Tool Servers:
- They will not show up in the tool indicator popup until enabled from the menu.
- Each global tool must be individually toggled on to become active inside your current chat.
- Once toggled on, they function the same way as user tools.
- Admins can control access to global tools via role-based permissions.
This is ideal for team setups or shared environments, where commonly-used tools (e.g., document search, memory, or web lookup) should be centrally accessible by multiple users.
---
## (Optional) Step 4: Use "Native" Function Calling (ReACT-style) Tool Use 🧠
:::info
For this to work effectively, **your selected model must support native tool calling**. Some local models claim support but often produce poor results. We strongly recommend using GPT-4o or another OpenAI model that supports function calling natively for the best experience.
:::
Want to enable ReACT-style (Reasoning + Acting) native function calls directly inside your conversations? You can switch Open WebUI to use native function calling.
✳️ How to enable native function calling:
1. Open the chat window.
2. Go to ⚙️ **Chat Controls > Advanced Params**.
3. Change the **Function Calling** parameter from `Default` to `Native`.
![Native Tool Call](/images/openapi-servers/open-webui/native.png)
---
## Need More Tools? Explore & Expand! 🧱
The [openapi-servers repo](https://github.com/open-webui/openapi-servers) includes a variety of useful reference servers:
- 📂 Filesystem access
- 🧠 Memory & knowledge graphs
- 🗃️ Git repo browsing
- 🌎 Web search (WIP)
- 🛢️ Database querying (WIP)
You can run any of these in the same way and connect them to Open WebUI by repeating the steps above.
---
## Troubleshooting & Tips 🧩
- ❌ Not connecting? Make sure the URL is correct and accessible from the browser used to run Open WebUI.
- 🔒 If you're using remote servers, check firewalls and HTTPS configs!
- 📝 To make servers persist, consider deploying them in Docker or with system services.
Need help? Visit the 👉 [Discussions page](https://github.com/open-webui/openapi-servers/discussions) or [open an issue](https://github.com/open-webui/openapi-servers/issues).