diff --git a/docs/faq.mdx b/docs/faq.mdx index 25d3448d..90f142f1 100644 --- a/docs/faq.mdx +++ b/docs/faq.mdx @@ -225,15 +225,63 @@ For more optimization tips, see the **[Performance Tips Guide](troubleshooting/p ### Q: Is MCP (Model Context Protocol) supported in Open WebUI? -**A:** Yes, Open WebUI now includes **native support for MCP Streamable HTTP**, enabling direct, first-class integration with MCP tools that communicate over the standard HTTP transport. For any **other MCP transports or non-HTTP implementations**, you should use our official proxy adapter, **MCPO**, available at 👉 [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo). MCPO provides a unified OpenAPI-compatible layer that bridges alternative MCP transports into Open WebUI safely and consistently. This architecture ensures maximum compatibility, strict security boundaries, and predictable tool behavior across different environments while keeping Open WebUI backend-agnostic and maintainable. +**A:** Yes, Open WebUI includes **native support for MCP Streamable HTTP**, enabling direct, first-class integration with MCP tools that communicate over the standard HTTP transport. For any **other MCP transports or non-HTTP implementations**, you should use our official proxy adapter, **MCPO**, available at 👉 [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo). MCPO provides a unified OpenAPI-compatible layer that bridges alternative MCP transports into Open WebUI safely and consistently. This architecture ensures maximum compatibility, strict security boundaries, and predictable tool behavior across different environments while keeping Open WebUI backend-agnostic and maintainable. -### Q: Why doesn't Open WebUI support [Specific Provider]'s latest API (e.g. OpenAI Responses API)? +### Q: Why doesn't Open WebUI natively support [Provider X]'s proprietary API? -**A:** Open WebUI is built around **universal protocols**, not specific providers. 
Our core philosophy is to support standard, widely-adopted APIs like the **OpenAI Chat Completions protocol**. +**A:** Open WebUI is highly modular with a plugin system including tools, functions, and most notably **[pipes](/features/plugin/functions/pipe)**. These modular pipes allow you to add support for virtually any provider you want—you can build your own or choose from the many [community-built](https://openwebui.com/) and usually well-maintained ones already available. -This protocol-centric design ensures that Open WebUI remains backend-agnostic and compatible with dozens of providers (like OpenRouter, LiteLLM, vLLM, and Groq) simultaneously. We avoid implementing proprietary, provider-specific APIs (such as OpenAI's stateful Responses API or Anthropic's Messages API) to prevent unsustainable architectural bloat and to maintain a truly open ecosystem. +That said, Open WebUI's core is built around **universal protocols**, not specific providers. Our stance is to support standard, widely-adopted APIs like the **OpenAI Chat Completions protocol**. -If you need functionality exclusive to a proprietary API (like OpenAI's hidden reasoning traces), we recommend using a proxy like **LiteLLM** or **OpenRouter**, which translate those proprietary features into the standard Chat Completions protocol that Open WebUI supports. +This protocol-centric design ensures that Open WebUI remains backend-agnostic and compatible with dozens of providers simultaneously. We avoid implementing proprietary, provider-specific APIs in the core to prevent unsustainable architectural bloat and to maintain a truly open ecosystem. + +:::note Experimental: Open Responses +As new standards emerge that gain broad adoption, we may add experimental support. Connections can now optionally be configured to use **[Open Responses](https://www.openresponses.org/)**—an open specification for multi-provider interoperability with consistent streaming events and tool use patterns. 
+::: + +We understand this request comes up frequently, especially for major providers. Here's why we've made this deliberate architectural decision: + +#### 1. The Cascading Demand Problem + +Supporting one proprietary API sets a precedent. Once that precedent exists, every other major provider becomes a reasonable request. What starts as "just one provider" quickly becomes many integrations, each with its own quirks, authentication schemes, and breaking changes. + +#### 2. Maintenance is the Real Burden + +Adding integration code is the easy part. **Maintaining it forever** is where the real cost lies: +- Each provider updates its API independently—when a provider changes something, we must update and test immediately +- Changes in one integration can break compatibility with others +- Every integration requires ongoing testing across multiple scenarios +- Bug reports flood in for each provider whenever it makes changes + +Contributors are **volunteers with full-time jobs**. Asking them to maintain 10+ provider integrations indefinitely is not sustainable. + +#### 3. Technical Complexity + +Each provider takes a different approach to: +- Reasoning/thinking content format and structure +- Authentication and request signing +- Error handling and rate limiting + +This requires provider-specific logic throughout both the backend and frontend, significantly increasing codebase complexity and tech debt. + +#### 4. Scale and Stability Requirements + +Open WebUI is used by major organizations worldwide. At this scale, stability is paramount—extensive testing and backwards-compatibility guarantees become exponentially harder with each added provider. + +#### 5. Pipes are the Modular Solution + +The pipes architecture exists precisely to solve this problem. Install a community pipe with one click and you get full provider API support. 
This is exactly the modularity that allows: +- Community members to maintain provider-specific integrations +- Users to choose only what they need +- The core project to remain stable and maintainable + +:::tip The Recommended Path +For providers that don't follow widely adopted API standards, use: +- **[Open WebUI community](https://openwebui.com/)**: Community-maintained provider integrations (one-click install) +- **Middleware proxies**: Tools like LiteLLM or OpenRouter can translate proprietary APIs to widely adopted API formats + +These solutions exist specifically to bridge the gap, and they're maintained by teams dedicated to that purpose. +::: ### Q: Why is the frontend integrated into the same Docker image? Isn't this unscalable or problematic? diff --git a/docs/features/chat-features/reasoning-models.mdx b/docs/features/chat-features/reasoning-models.mdx index 5233cf59..a39a4912 100644 --- a/docs/features/chat-features/reasoning-models.mdx +++ b/docs/features/chat-features/reasoning-models.mdx @@ -347,7 +347,7 @@ Open WebUI serializes reasoning as text wrapped in tags (e.g., `... OpenAI > Manage** (look for the wrench icon). +3. Click ➕ **Add New Connection** or edit an existing connection. + +--- + +## Step 2: Select the API Type + +In the connection modal, look for the **API Type** setting: + +- **Chat Completions** (default): Uses the standard OpenAI Chat Completions API format. +- **Responses** (experimental): Uses the Open Responses API format. + +Click the toggle to switch to **Responses**. You'll see an "Experimental" label indicating this feature is still in development. 
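The **API Type** toggle described above selects which wire format the connection uses. As a rough sketch of the difference, the two shapes below contrast a Chat Completions request with a Responses-style one; the field names follow common Responses-style conventions and are illustrative assumptions, not a guarantee of the exact payload Open WebUI sends:

```python
# Illustrative only: the two request shapes the "API Type" setting selects between.
# Field names are assumed from common Responses-style conventions; real payloads may differ.

def chat_completions_payload(model: str, prompt: str) -> dict:
    # Classic OpenAI Chat Completions shape: a list of role/content messages.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }

def responses_payload(model: str, prompt: str) -> dict:
    # Responses-style shape: an "input" field instead of "messages".
    return {
        "model": model,
        "input": [{"role": "user", "content": prompt}],
        "stream": True,
    }
```

Either way, the URL, API key, and model IDs you enter in the next step are the same; only the request/response schema and streaming event model change.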
+ +![API Type Toggle](/images/getting-started/quick-start/api-type-responses.gif) + +--- + +## Step 3: Configure Your Provider + +Enter the connection details for a provider that supports the Open Responses format: + +- **URL**: Your provider's API endpoint +- **API Key**: Your authentication credentials +- **Model IDs**: (Optional) Limit the connection to particular models + +--- + +## Supported Providers + +Open Responses is a new specification, so provider support is still growing. Check the [Open Responses website](https://www.openresponses.org/) for the latest list of compliant providers and implementations. + +--- + +## Troubleshooting + +### Connection works with Chat Completions but not Responses + +Not all providers support the Open Responses format yet. Try: + +1. Switching back to Chat Completions +2. Checking if your provider explicitly supports Open Responses +3. Using a middleware proxy that can translate to Open Responses format + +### Streaming or tool calls behave unexpectedly + +This feature is experimental. If you encounter issues: + +1. Check the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s) for known issues +2. 
Report bugs on [GitHub](https://github.com/open-webui/open-webui/issues) + +--- + +## Learn More + +- **[Open Responses Specification](https://www.openresponses.org/specification)**: Full technical specification +- **[Open Responses GitHub](https://github.com/openresponses/openresponses)**: Source code and discussions +- **[FAQ: Why doesn't Open WebUI natively support proprietary APIs?](/faq#q-why-doesnt-open-webui-natively-support-provider-xs-proprietary-api)**: Learn about our protocol-first philosophy diff --git a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx index a4b4de7c..daa03f9e 100644 --- a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx +++ b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx @@ -17,10 +17,12 @@ Open WebUI is built around **Standard Protocols**. Instead of building specific This means that while Open WebUI handles the **interface and tools**, it expects your backend to follow the universal Chat Completions standard. -- **We Support Protocols**: Any provider that follows the OpenAI Chat Completions standard (like Groq, OpenRouter, or LiteLLM) is natively supported. -- **We Avoid Proprietary APIs**: We do not implement provider-specific, non-standard APIs (such as OpenAI's stateful Responses API or Anthropic's native Messages API) to maintain a universal, maintainable codebase. +- **We Support Protocols**: Any provider that follows widely adopted API standards (like Groq, OpenRouter, OpenAI, Mistral, Perplexity and many more) is natively supported. We also have experimental support for **[Open Responses](https://www.openresponses.org/)**. +- **We Avoid Proprietary APIs**: We do not implement provider-specific, non-standard APIs in the core to maintain a universal, maintainable codebase. For unsupported providers, use a pipe or middleware proxy. 
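As a concrete illustration of the pipe escape hatch these docs recommend for unsupported providers: a pipe is a small Python class that Open WebUI loads and exposes as a model. A minimal sketch follows the documented `pipe(body)` interface; the echo body is a hypothetical stand-in for a real provider call:

```python
# Minimal sketch of an Open WebUI pipe (see /features/plugin/functions/pipe).
# The "provider call" here is a placeholder, not a real SDK.

class Pipe:
    def __init__(self):
        self.name = "example-provider"  # how the model appears in the UI

    def pipe(self, body: dict) -> str:
        # `body` arrives as an OpenAI-style Chat Completions request.
        messages = body.get("messages", [])
        prompt = messages[-1]["content"] if messages else ""
        # A real pipe would translate `prompt` into the provider's proprietary
        # API call and map the response back; we simply echo for the sketch.
        return f"[example-provider] {prompt}"
```

This is why pipes scale where core integrations don't: each provider-specific translation lives in its own installable unit, maintained independently of the core.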
-If you are using a provider that requires a proprietary API, we recommend using a proxy tool like **LiteLLM** or **OpenRouter** to bridge them to the standard OpenAI protocol supported by Open WebUI. +If you are using a provider that requires a proprietary API, we recommend using a **[pipe](/features/plugin/functions/pipe)** (browse [community pipes](https://openwebui.com/)) or a middleware proxy like LiteLLM or OpenRouter to bridge them to a supported protocol. + +For a detailed explanation of why we've made this architectural decision, see our **[FAQ: Why doesn't Open WebUI natively support proprietary APIs?](/faq#q-why-doesnt-open-webui-natively-support-provider-xs-proprietary-api)** ### Popular Compatible Servers and Providers diff --git a/docs/getting-started/quick-start/starting-with-openai.mdx b/docs/getting-started/quick-start/starting-with-openai.mdx index d8485c44..7b86d6e5 100644 --- a/docs/getting-started/quick-start/starting-with-openai.mdx +++ b/docs/getting-started/quick-start/starting-with-openai.mdx @@ -13,9 +13,9 @@ Open WebUI makes it easy to connect and use OpenAI and other OpenAI-compatible A ## Important: Protocols, Not Providers -Open WebUI is a **protocol-centric** platform. While we provide first-class support for OpenAI models, we do so exclusively through the **OpenAI Chat Completions API protocol**. +Open WebUI is a **protocol-centric** platform. While we provide first-class support for OpenAI models, we do so mainly through the **OpenAI Chat Completions API protocol**. -We do **not** support proprietary, non-standard APIs such as OpenAI’s new stateful **Responses API**. Instead, Open WebUI focuses on universal standards that are shared across dozens of providers. This approach keeps Open WebUI fast, stable, and truly open-sourced. +We focus on universal standards shared across dozens of providers, with experimental support for emerging standards like **[Open Responses](https://www.openresponses.org/)**. 
For a detailed explanation, see our **[FAQ on protocol support](/faq#q-why-doesnt-open-webui-natively-support-provider-xs-proprietary-api)**. --- diff --git a/docs/getting-started/quick-start/starting-with-vllm.mdx b/docs/getting-started/quick-start/starting-with-vllm.mdx index 90a4f1b1..adf2ba41 100644 --- a/docs/getting-started/quick-start/starting-with-vllm.mdx +++ b/docs/getting-started/quick-start/starting-with-vllm.mdx @@ -5,7 +5,11 @@ title: "Starting With vLLM" ## Overview -vLLM provides an OpenAI-compatible API, making it easy to connect to Open WebUI. This guide will show you how to connect your vLLM server. +vLLM provides an **OpenAI-compatible API** (Chat Completions), making it easy to connect to Open WebUI. This guide will show you how to connect your vLLM server. + +:::tip +Open WebUI also supports the experimental [Open Responses](/getting-started/quick-start/starting-with-open-responses) specification for providers that implement it. +::: --- diff --git a/static/images/getting-started/quick-start/api-type-responses.gif b/static/images/getting-started/quick-start/api-type-responses.gif new file mode 100644 index 00000000..d3daae0e Binary files /dev/null and b/static/images/getting-started/quick-start/api-type-responses.gif differ
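To make the vLLM hunk's "OpenAI-compatible API" claim concrete, here is a sketch of the Chat Completions request shape such a server accepts; the localhost URL and model name are illustrative placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

# Placeholder local vLLM endpoint; vLLM serves the OpenAI-style API under /v1.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    # The same Chat Completions shape any OpenAI-compatible backend accepts.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("facebook/opt-125m", "Hello!")
```

Because the wire format is the shared protocol rather than a provider SDK, the same request works unchanged against vLLM, Groq, OpenRouter, LiteLLM, or any other compatible server.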