Merge pull request #824 from hhhhhhhhhhhhhhhhho/main

This commit is contained in:
Classic298
2025-11-17 18:18:35 +01:00
committed by GitHub
3 changed files with 40 additions and 2 deletions

@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 title: "Getting Started with Functions"
 ---

@@ -1,6 +1,6 @@
 ---
-sidebar_position: 4
+sidebar_position: 5
 title: "Starting with OpenAI-Compatible Servers"
 ---

@@ -0,0 +1,38 @@
---
sidebar_position: 4
title: "Starting With vLLM"
---
## Overview

vLLM provides an OpenAI-compatible API, which makes it easy to connect to Open WebUI. This guide shows you how to connect your vLLM server.

---
## Step 1: Set Up Your vLLM Server

Make sure your vLLM server is running and accessible. The default API base URL is typically:
```
http://localhost:8000/v1
```
For remote servers, replace `localhost` with the appropriate hostname or IP address.
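If you want to confirm the server is reachable before wiring it into Open WebUI, you can query the `/v1/models` route of the OpenAI-compatible API that vLLM serves. A minimal sketch, assuming a local server on the default port and the `requests` package installed:

```python
# Sanity-check a vLLM server by listing the models it serves.
# Assumes the server runs locally on the default port; adjust BASE_URL as needed.
import requests

BASE_URL = "http://localhost:8000/v1"

resp = requests.get(f"{BASE_URL}/models", timeout=5)
resp.raise_for_status()  # raises if the server is unreachable or returns an error
for model in resp.json()["data"]:
    print(model["id"])
```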
---
## Step 2: Add the API Connection in Open WebUI

1. Go to ⚙️ **Admin Settings**.
2. Navigate to **Connections > OpenAI > Manage** (look for the wrench icon).
3. Click **Add New Connection**.
4. Fill in the following:
- **API URL**: `http://localhost:8000/v1` (or your vLLM server URL)
- **API Key**: Leave empty (vLLM typically doesn't require an API key for local connections)
5. Click **Save**. (To verify these values first, see the sketch below.)
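Because vLLM speaks the OpenAI API, you can also validate the URL/key pair from a script before entering it in the UI. A minimal sketch using the official `openai` Python client; `"EMPTY"` is a placeholder, since the client requires a non-empty string even when the server ignores the key:

```python
# Verify the connection details you are about to enter in Open WebUI.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the API URL from step 4
    api_key="EMPTY",  # placeholder; vLLM ignores it unless started with --api-key
)

# If this prints your model IDs, Open WebUI will be able to connect too.
for model in client.models.list().data:
    print(model.id)
```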
---
## Step 3: Start Using Models

Select any model available on your vLLM server from the Model Selector and start chatting.
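For reference, each chat in Open WebUI corresponds to a standard OpenAI chat completion request against the same endpoint. A minimal sketch of the equivalent direct call; the model ID here is hypothetical, so substitute one of the IDs your server actually lists:

```python
# Send a chat completion request directly to the vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical; use a model your server serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```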