chore: format

silentoplayz
2025-10-05 23:29:50 -04:00
parent c4ada90210
commit 3e3da9e0a0
152 changed files with 2247 additions and 1677 deletions


@@ -4,12 +4,14 @@ title: "🔄 Backend-Controlled, UI-Compatible API Flow"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/docs/tutorials/tips/contributing-tutorial.md).
:::
---
# Backend-Controlled, UI-Compatible API Flow
This tutorial demonstrates how to implement server-side orchestration of Open WebUI conversations while ensuring that assistant replies appear properly in the frontend UI. This approach requires zero frontend involvement and allows complete backend control over the chat flow.
This tutorial has been verified to work with Open WebUI version v0.6.15. Future versions may introduce changes in behavior or API structure.
@@ -49,6 +51,7 @@ This enables server-side orchestration while still making replies show up in the
Before triggering the completion, the assistant message must be added to the chat response object in memory. This prerequisite is essential because the Open WebUI frontend expects assistant messages to exist in a specific structure.
The assistant message must appear in both locations:
- `chat.messages[]` - The main message array
- `chat.history.messages[<assistantId>]` - The indexed message history
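The dual-placement requirement above can be sketched in Java. This is an illustrative sketch using plain maps rather than the tutorial's `OWUIChatResponse` types; the helper and class names here are hypothetical, while the field names come from the message structures described in this tutorial.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Mirrors an assistant message into both locations the Open WebUI
// frontend reads from: chat.messages[] and chat.history.messages[<id>].
class AssistantMessagePlacement {

    @SuppressWarnings("unchecked")
    static void addAssistantMessage(Map<String, Object> chat,
                                    String assistantId,
                                    String parentUserMsgId,
                                    String model) {
        Map<String, Object> assistantMsg = new HashMap<>();
        assistantMsg.put("id", assistantId);
        assistantMsg.put("role", "assistant");
        assistantMsg.put("content", "");           // empty until the completion fills it
        assistantMsg.put("parentId", parentUserMsgId);
        assistantMsg.put("modelName", model);
        assistantMsg.put("modelIdx", 0);
        assistantMsg.put("timestamp", System.currentTimeMillis()); // ms, per the constraints below

        // Location 1: the main message array
        List<Map<String, Object>> messages = (List<Map<String, Object>>)
                chat.computeIfAbsent("messages", k -> new ArrayList<>());
        messages.add(assistantMsg);

        // Location 2: the indexed message history
        Map<String, Object> history = (Map<String, Object>)
                chat.computeIfAbsent("history", k -> new HashMap<>());
        Map<String, Object> indexed = (Map<String, Object>)
                history.computeIfAbsent("messages", k -> new HashMap<>());
        indexed.put(assistantId, assistantMsg);
    }
}
```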
@@ -122,8 +125,12 @@ public void enrichChatWithAssistantMessage(OWUIChatResponse chatResponse, String
}
```
:::note
This step can be performed in memory on the response object, or combined with Step 1 by including both the user message and an empty assistant message in the initial chat creation.
:::
### Step 3: Update Chat with Assistant Message
Send the enriched chat state containing both user and assistant messages to the server:
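In Java, this update call can be sketched with the standard `java.net.http` client (Java 11+). The `POST /api/v1/chats/<chatId>` endpoint follows the curl examples in this tutorial; host, token, and the JSON body are placeholders.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds the "update chat" request carrying the enriched chat state.
class ChatUpdateRequest {

    static HttpRequest build(String host, String chatId,
                             String token, String chatJson) {
        return HttpRequest.newBuilder()
                .uri(URI.create(host + "/api/v1/chats/" + chatId))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(chatJson))
                .build();
        // Send with: HttpClient.newHttpClient()
        //     .send(request, HttpResponse.BodyHandlers.ofString())
    }
}
```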
@@ -272,9 +279,11 @@ curl -X POST https://<host>/api/chat/completions \
Assistant responses can be handled in two ways depending on your implementation needs:
#### Option A: Stream Processing (Recommended)
If using `stream: true` in the completion request, you can process the streamed response in real time and wait for the stream to complete. This is the approach used by the Open WebUI web interface and provides immediate feedback.
#### Option B: Polling Approach
For implementations that cannot handle streaming, poll the chat endpoint until the response is ready. Use a retry mechanism with exponential backoff:
@@ -300,17 +309,18 @@ public String getAssistantResponseWhenReady(String chatId, ChatCompletedRequest
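The full Java polling method is elided in this diff; the signature above suggests its shape. The following is a minimal sketch of exponential-backoff polling, with the HTTP fetch abstracted behind a `Supplier` so the retry logic stands alone. In a real implementation the supplier would `GET /api/v1/chats/<chatId>` and extract the assistant message's `content`; the names here are hypothetical.

```java
import java.util.function.Supplier;

// Exponential-backoff polling: retry until the assistant content is
// non-empty, doubling the delay after each empty read.
class AssistantResponsePoller {

    static String getAssistantResponseWhenReady(
            Supplier<String> fetchAssistantContent,
            int maxAttempts,
            long baseDelayMillis) throws InterruptedException {
        long delay = baseDelayMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            String content = fetchAssistantContent.get();
            if (content != null && !content.isEmpty()) {
                return content;              // response is ready
            }
            Thread.sleep(delay);
            delay *= 2;                      // exponential backoff
        }
        throw new IllegalStateException(
                "Assistant response not ready after " + maxAttempts + " attempts");
    }
}
```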
For manual polling, you can use:
```bash
# Poll every few seconds until assistant content is populated
while true; do
response=$(curl -s -X GET https://<host>/api/v1/chats/<chatId> \
-H "Authorization: Bearer <token>")
# Check if assistant message has content (response is ready)
if echo "$response" | jq '.chat.messages[] | select(.role=="assistant" and .id=="assistant-msg-id") | .content' | grep -v '""' > /dev/null; then
echo "Assistant response is ready!"
break
fi
echo "Waiting for assistant response..."
sleep 2
done
```
@@ -404,6 +414,7 @@ curl -X POST https://<host>/api/v1/chats/<chatId> \
Assistant responses may be wrapped in markdown code blocks. Here's how to clean them:
````bash
# Example raw response from assistant (body content is illustrative)
raw_response='```json
{
  "result": "success"
}
```'

# Strip the opening ```json fence line and the closing ``` fence line
cleaned_response=$(printf '%s\n' "$raw_response" | sed '1d;$d')

echo "$cleaned_response" | jq '.'
````
This cleaning process handles:
- Removal of the `` ```json `` prefix
- Removal of the `` ``` `` suffix
- Trimming whitespace
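The same cleaning step can be done server-side in Java. This sketch strips a leading fence line (```` ```json ````, or any other info string) and a trailing ```` ``` ```` line, then trims whitespace; it leaves unfenced input untouched.

```java
// Strips markdown code fences that some models wrap around JSON output.
class CodeFenceCleaner {

    static String stripMarkdownFences(String raw) {
        String s = raw.trim();
        if (s.startsWith("```")) {
            int firstNewline = s.indexOf('\n');
            // drop the opening fence line (```json, ```bash, or bare ```)
            s = (firstNewline >= 0) ? s.substring(firstNewline + 1) : "";
        }
        if (s.endsWith("```")) {
            s = s.substring(0, s.length() - 3);  // drop the closing fence
        }
        return s.trim();
    }
}
```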
@@ -797,24 +809,29 @@ This cleaning process handles:
#### Required vs Optional Fields
**Chat Creation - Required Fields:**
- `title` - Chat title (string)
- `models` - Array of model names (string[])
- `messages` - Initial message array
**Chat Creation - Optional Fields:**
- `files` - Knowledge files for RAG (defaults to empty array)
- `tags` - Chat tags (defaults to empty array)
- `params` - Model parameters (defaults to empty object)
**Message Structure - User Message:**
- **Required:** `id`, `role`, `content`, `timestamp`, `models`
- **Optional:** `parentId` (for threading)
**Message Structure - Assistant Message:**
- **Required:** `id`, `role`, `content`, `parentId`, `modelName`, `modelIdx`, `timestamp`
- **Optional:** Additional metadata fields
**ChatCompletionsRequest - Required Fields:**
- `chat_id` - Target chat ID
- `id` - Assistant message ID
- `messages` - Array of ChatCompletionMessage
@@ -822,6 +839,7 @@ This cleaning process handles:
- `session_id` - Session identifier
**ChatCompletionsRequest - Optional Fields:**
- `stream` - Enable streaming (defaults to false)
- `background_tasks` - Control automatic tasks
- `features` - Enable/disable features
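The required and optional fields above can be assembled as a plain map before serialization. This is a sketch with placeholder values; it also assumes a `model` field is required, consistent with the completions call shown in the curl examples, since part of the required-field list is elided in this diff.

```java
import java.util.List;
import java.util.Map;

// Assembles a ChatCompletionsRequest payload from the field lists above.
class CompletionsRequestBuilder {

    static Map<String, Object> build(String chatId, String assistantMsgId,
                                     String model, String sessionId,
                                     List<Map<String, String>> messages) {
        return Map.of(
                "chat_id", chatId,        // required: target chat ID
                "id", assistantMsgId,     // required: assistant message ID
                "messages", messages,     // required: ChatCompletionMessage array
                "model", model,           // assumed required (see curl examples)
                "session_id", sessionId,  // required: session identifier
                "stream", true            // optional: stream the response
        );
    }
}
```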
@@ -832,22 +850,27 @@ This cleaning process handles:
#### Field Constraints
**Timestamps:**
- Format: Unix timestamp in milliseconds
- Example: `1720000000000` (July 3, 2024, 09:46:40 UTC)
**UUIDs:**
- All ID fields should use valid UUID format
- Example: `550e8400-e29b-41d4-a716-446655440000`
**Model Names:**
- Must match available models in your Open WebUI instance
- Common examples: `gpt-4o`, `gpt-3.5-turbo`, `claude-3-sonnet`
**Session IDs:**
- Can be any unique string identifier
- Recommendation: Use UUID format for consistency
**Knowledge File Status:**
- Valid values: `"processed"`, `"processing"`, `"error"`
- Only use `"processed"` files for completions
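A few small helpers cover these constraints directly in Java; this is a sketch, with the status values and formats taken from the lists above.

```java
import java.util.UUID;
import java.util.regex.Pattern;

// Helpers matching the field constraints: millisecond timestamps,
// UUID-formatted IDs, and knowledge-file status checks.
class FieldConstraints {

    private static final Pattern UUID_PATTERN = Pattern.compile(
            "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}");

    static long newTimestamp() {
        return System.currentTimeMillis();   // Unix time in milliseconds
    }

    static String newId() {
        return UUID.randomUUID().toString(); // e.g. 550e8400-e29b-41d4-a716-446655440000
    }

    static boolean isValidUuid(String id) {
        return UUID_PATTERN.matcher(id).matches();
    }

    static boolean isUsableKnowledgeFile(String status) {
        // valid statuses: "processed", "processing", "error";
        // only "processed" files should be sent with completions
        return "processed".equals(status);
    }
}
```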
@@ -878,6 +901,7 @@ Use the Open WebUI backend APIs to:
7. **Fetch the final chat** - Retrieve and parse the completed conversation
**Enhanced Capabilities:**
- **RAG Integration** - Include knowledge collections for context-aware responses
- **Asynchronous Processing** - Handle long-running AI operations with streaming or polling
- **Response Parsing** - Clean and validate JSON responses from the assistant
@@ -890,6 +914,7 @@ The key advantage of this approach is that it maintains full compatibility with
## Testing
You can test your implementation by following the step-by-step CURL examples provided above. Make sure to replace placeholder values with your actual:
- Host URL
- Authentication token
- Chat IDs
@@ -897,5 +922,7 @@ You can test your implementation by following the step-by-step CURL examples pro
- Model names
:::tip
Start with a simple user message and gradually add complexity like knowledge integration and advanced features once the basic flow is working.
:::