---
title: Agents API (Beta)
icon: Plug
description: Access LibreChat agents programmatically via OpenAI-compatible and Open Responses API endpoints
---

<Callout type="warning" title="Beta Feature">
The Agents API is currently in beta. Endpoints, request/response formats, and behavior may change as we iterate toward a stable release.
</Callout>

LibreChat exposes your agents through two API-compatible interfaces, allowing external applications, scripts, and services to interact with your agents programmatically.

## Overview

The Agents API provides two interfaces:

- **OpenAI-compatible Chat Completions** — `POST /api/agents/v1/chat/completions`
- **Open Responses API** — `POST /api/agents/v1/responses`

Both are authenticated via API keys and support streaming responses, making it easy to integrate LibreChat agents into existing workflows that already use OpenAI SDKs or similar tooling.

LibreChat is adopting [Open Responses](https://www.openresponses.org/) as its primary API framework for serving agents. While the Chat Completions endpoint provides backward compatibility with existing OpenAI-compatible tooling, the Open Responses endpoint represents the future direction.

## Enabling the Agents API

The Agents API is gated behind the `remoteAgents` interface configuration. All permissions default to `false`.

```yaml filename="librechat.yaml"
interface:
  remoteAgents:
    use: true
    create: true
```

See [Interface Configuration — remoteAgents](/docs/configuration/librechat_yaml/object_structure/interface#remoteagents) for all available options.

**Note:** Admin users have all remote agent permissions enabled by default.

## API Key Management

Once `remoteAgents.use` and `remoteAgents.create` are enabled, users can generate API keys from the LibreChat UI. These keys authenticate requests to the Agents API.

## Endpoints

### Chat Completions (OpenAI-compatible)

```
POST /api/agents/v1/chat/completions
```

Use any OpenAI-compatible SDK by pointing it at your LibreChat instance. The `model` parameter corresponds to an agent ID.

**Example with curl:**

```bash
curl -X POST https://your-librechat-instance/api/agents/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "agent_abc123",
    "messages": [
      {"role": "user", "content": "Hello, what can you help me with?"}
    ],
    "stream": true
  }'
```

**Example with OpenAI SDK (Python):**

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-librechat-instance/api/agents/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="agent_abc123",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

for chunk in response:
    # Guard against empty choices and the final chunk, whose delta carries no content
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

### List Models

```
GET /api/agents/v1/models
```

Returns available agents as models. Useful for discovering which agents are accessible with your API key.

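Assuming the endpoint returns the standard OpenAI-compatible model list shape (`{"object": "list", "data": [...]}`), a minimal sketch of extracting agent IDs from such a payload (the payload below is hypothetical, not actual output from a LibreChat instance):

```python
import json

# Hypothetical payload illustrating the OpenAI-compatible model list shape;
# the exact fields returned by your instance may differ.
payload = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "agent_abc123", "object": "model"},
    {"id": "agent_def456", "object": "model"}
  ]
}
""")

# Collect the agent IDs, which can then be passed as the `model` parameter.
agent_ids = [model["id"] for model in payload["data"]]
print(agent_ids)
```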
### Open Responses API

```
POST /api/agents/v1/responses
```

The Open Responses endpoint follows the [Open Responses specification](https://www.openresponses.org/), an open inference standard initiated by OpenAI and built by the open-source AI community. It is designed for agentic workflows with native support for reasoning, tool use, structured outputs, and streaming semantic events.

```bash
curl -X POST https://your-librechat-instance/api/agents/v1/responses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "agent_abc123",
    "input": "What is the weather today?"
  }'
```

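Unlike Chat Completions, Responses-style bodies carry the model output as a list of output items rather than `choices`. A sketch of pulling the text out of a non-streaming response, using a hypothetical payload shaped per the Responses format (the exact fields LibreChat returns may differ):

```python
import json

# Hypothetical response body in the Responses-style shape: a list of output
# items whose message content holds `output_text` parts.
body = json.loads("""
{
  "id": "resp_123",
  "model": "agent_abc123",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {"type": "output_text", "text": "Here is today's weather."}
      ]
    }
  ]
}
""")

# Concatenate every output_text part across all message items.
text = "".join(
    part["text"]
    for item in body["output"] if item["type"] == "message"
    for part in item["content"] if part["type"] == "output_text"
)
print(text)
```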
## Token Usage Tracking

All Agents API requests track token usage against the user's balance (when token spending is configured). Both input and output tokens are counted, including cache tokens for providers that support them (OpenAI, Anthropic).

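As a rough illustration only (not LibreChat's actual accounting logic), a usage object in the OpenAI-style shape can be tallied like this; the field names and the separate cached-token count are assumptions based on that shape:

```python
# Hypothetical usage object in the OpenAI-style shape; the fields and the
# handling of cached tokens below are illustrative assumptions.
usage = {
    "prompt_tokens": 120,
    "completion_tokens": 45,
    "prompt_tokens_details": {"cached_tokens": 80},
}

prompt = usage["prompt_tokens"]
completion = usage["completion_tokens"]
cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)

# Every input and output token counts toward the balance; cached input
# tokens are reported separately so a provider-specific rate can apply.
total_tokens = prompt + completion
print(total_tokens, cached)
```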
## Roadmap

- **Open Responses as primary interface** — We plan to expand the Open Responses endpoint with full support for agentic loops, tool orchestration, and streaming semantic events.
- **Anthropic Messages API** — We may add support for the Anthropic Messages API format as an additional interface in the future.

## Related Documentation

- [Agents](/docs/features/agents) — Creating and configuring agents
- [Interface Configuration — remoteAgents](/docs/configuration/librechat_yaml/object_structure/interface#remoteagents) — Access control settings
- [Token Usage](/docs/configuration/token_usage) — Configuring token spending and balance
- [Open Responses Specification](https://www.openresponses.org/) — The open inference standard