Update starting-with-openai-compatible.mdx

DrMelone
2025-12-20 21:14:47 +01:00
parent a8efeadb8f
commit 720b1aec38


@@ -69,6 +69,25 @@ If running Open WebUI in Docker and your model server on your host machine, use
---
## Required API Endpoints
To ensure full compatibility with Open WebUI, your server should implement the following OpenAI-standard endpoints:
| Endpoint | Method | Required? | Purpose |
| :--- | :--- | :--- | :--- |
| `/v1/models` | `GET` | **Yes** | Used for model discovery and selecting models in the UI. |
| `/v1/chat/completions` | `POST` | **Yes** | The core endpoint for chat, supporting streaming and parameters like temperature. |
| `/v1/embeddings` | `POST` | No | Required if you want to use this provider for RAG (Retrieval Augmented Generation). |
| `/v1/audio/speech` | `POST` | No | Required for Text-to-Speech (TTS) functionality. |
| `/v1/audio/transcriptions` | `POST` | No | Required for Speech-to-Text (STT/Whisper) functionality. |
| `/v1/images/generations` | `POST` | No | Required for Image Generation (DALL-E) functionality. |
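Of the required endpoints, `/v1/models` drives model discovery: Open WebUI reads the `id` of each entry in the standard OpenAI list shape to populate its model selector. As a minimal sketch (the model name here is a placeholder, not something Open WebUI defines), a valid response body looks like:

```python
import json

# Hypothetical minimal response body for GET /v1/models, following the
# standard OpenAI list shape that Open WebUI uses for model discovery.
models_response = {
    "object": "list",
    "data": [
        {"id": "my-local-model", "object": "model", "owned_by": "local"},
    ],
}

# Open WebUI reads the `id` of each entry to populate the model selector.
model_ids = [m["id"] for m in models_response["data"]]
print(json.dumps(models_response, indent=2))
```

If your server returns a bare list or omits the `object`/`data` wrapper, model discovery may fail even though chat completions work.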
### Supported Parameters
Open WebUI passes standard OpenAI parameters such as `temperature`, `top_p`, `max_tokens` (or `max_completion_tokens`), `stop`, `seed`, and `logit_bias`. It also supports **Tool Use** (Function Calling) if your model and server support the `tools` and `tool_choice` parameters.
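To illustrate, here is a sketch of the kind of request body a `POST /v1/chat/completions` call might carry when these parameters are in play. The model name and the `get_weather` tool definition are hypothetical placeholders for illustration, not actual Open WebUI output:

```python
import json

# Hypothetical request body for POST /v1/chat/completions showing the
# standard OpenAI parameters passed through, plus a tool definition for
# function calling. All names here are illustrative placeholders.
payload = {
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 256,
    "stream": True,  # Open WebUI streams responses by default
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

body = json.dumps(payload)
```

A server that ignores unknown fields will still work with the required parameters; tool use only functions end to end if both the server and the underlying model honor `tools` and `tool_choice`.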
---
## Step 3: Start Chatting!
Select your connected server's model in the chat menu and get started!