Mirror of https://github.com/open-webui/docs.git, synced 2025-12-12 07:29:49 +07:00
Merge pull request #825 from open-webui/main
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 title: "Getting Started with Functions"
 ---
@@ -1,6 +1,6 @@
 ---
-sidebar_position: 4
+sidebar_position: 5
 title: "Starting with OpenAI-Compatible Servers"
 ---
 
38 docs/getting-started/quick-start/starting-with-vllm.mdx Normal file
@@ -0,0 +1,38 @@
---
sidebar_position: 4
title: "Starting With vLLM"
---

## Overview

vLLM provides an OpenAI-compatible API, making it easy to connect to Open WebUI. This guide will show you how to connect your vLLM server.

---

## Step 1: Set Up Your vLLM Server

Make sure your vLLM server is running and accessible. The default API base URL is typically:

```
http://localhost:8000/v1
```

For remote servers, use the appropriate hostname or IP address.
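The local and remote cases above differ only in the host part of the base URL. A minimal sketch, using a hypothetical helper (not part of vLLM or Open WebUI), makes that concrete:

```python
def vllm_base_url(host: str, port: int = 8000, scheme: str = "http") -> str:
    """Build the OpenAI-compatible base URL for a vLLM server."""
    return f"{scheme}://{host}:{port}/v1"

# Local default, as shown above:
print(vllm_base_url("localhost"))     # http://localhost:8000/v1
# Remote server by IP (example address):
print(vllm_base_url("192.168.1.50"))  # http://192.168.1.50:8000/v1
```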
---

## Step 2: Add the API Connection in Open WebUI

1. Go to ⚙️ **Admin Settings**.
2. Navigate to **Connections > OpenAI > Manage** (look for the wrench icon).
3. Click ➕ **Add New Connection**.
4. Fill in the following:
   - **API URL**: `http://localhost:8000/v1` (or your vLLM server URL)
   - **API Key**: Leave empty (vLLM typically doesn't require an API key for local connections)
5. Click **Save**.

---
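Once the connection is saved, available models are discovered through the standard OpenAI `GET {base}/models` route. The sketch below only builds that request (no network call is made), with no API key attached, mirroring the empty-key setup above:

```python
import urllib.request

base_url = "http://localhost:8000/v1"  # the API URL entered in step 4
# No Authorization header: vLLM typically accepts keyless local requests.
req = urllib.request.Request(f"{base_url}/models", method="GET")
print(req.full_url)       # http://localhost:8000/v1/models
print(req.get_method())   # GET
```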
|
||||
## Step 3: Start Using Models
|
||||
|
||||
Select any model that's available on your vLLM server from the Model Selector and start chatting.
|
||||
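Chats against the selected model go through vLLM's OpenAI-compatible `/v1/chat/completions` route. A minimal payload sketch (the model name is a placeholder for whatever your server actually serves; the request is only assembled here, not sent):

```python
import json

base_url = "http://localhost:8000/v1"
payload = {
    "model": "your-served-model",  # placeholder: pick a model from the selector
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload).encode("utf-8")
print(f"POST {base_url}/chat/completions, {len(body)} bytes")
```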