---
title: "Example"
icon: FileCode
---

## Clean Example

<Callout type="example" title="Example" collapsible>

This example config includes all documented endpoints (except Azure, LiteLLM, MLX, and Ollama, which all require additional configuration).
```yaml filename="librechat.yaml"
version: 1.3.4

cache: true

interface:
  # MCP Servers UI configuration
  mcpServers:
    placeholder: 'MCP Servers'

  # Privacy policy settings
  privacyPolicy:
    externalUrl: 'https://librechat.ai/privacy-policy'
    openNewTab: true

  # Terms of service
  termsOfService:
    externalUrl: 'https://librechat.ai/tos'
    openNewTab: true

registration:
  socialLogins: ["discord", "facebook", "github", "google", "openid"]

endpoints:
  custom:
    # Anyscale
    - name: "Anyscale"
      apiKey: "${ANYSCALE_API_KEY}"
      baseURL: "https://api.endpoints.anyscale.com/v1"
      models:
        default: [
          "meta-llama/Llama-2-7b-chat-hf",
          ]
        fetch: true
      titleConvo: true
      titleModel: "meta-llama/Llama-2-7b-chat-hf"
      summarize: false
      summaryModel: "meta-llama/Llama-2-7b-chat-hf"
      modelDisplayLabel: "Anyscale"

    # APIpie
    - name: "APIpie"
      apiKey: "${APIPIE_API_KEY}"
      baseURL: "https://apipie.ai/v1/"
      models:
        default: [
          "gpt-4",
          "gpt-4-turbo",
          "gpt-3.5-turbo",
          "claude-3-opus",
          "claude-3-sonnet",
          "claude-3-haiku",
          "llama-3-70b-instruct",
          "llama-3-8b-instruct",
          "gemini-pro-1.5",
          "gemini-pro",
          "mistral-large",
          "mistral-medium",
          "mistral-small",
          "mistral-tiny",
          "mixtral-8x22b",
          ]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      dropParams: ["stream"]

    # cohere
    - name: "cohere"
      apiKey: "${COHERE_API_KEY}"
      baseURL: "https://api.cohere.ai/v1"
      models:
        default: ["command-r", "command-r-plus", "command-light", "command-light-nightly", "command", "command-nightly"]
        fetch: false
      modelDisplayLabel: "cohere"
      titleModel: "command"
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty", "temperature", "top_p"]

    # Fireworks
    - name: "Fireworks"
      apiKey: "${FIREWORKS_API_KEY}"
      baseURL: "https://api.fireworks.ai/inference/v1"
      models:
        default: [
          "accounts/fireworks/models/mixtral-8x7b-instruct",
          ]
        fetch: true
      titleConvo: true
      titleModel: "accounts/fireworks/models/llama-v2-7b-chat"
      summarize: false
      summaryModel: "accounts/fireworks/models/llama-v2-7b-chat"
      modelDisplayLabel: "Fireworks"
      dropParams: ["user"]

    # groq
    - name: "groq"
      apiKey: "${GROQ_API_KEY}"
      baseURL: "https://api.groq.com/openai/v1/"
      models:
        default: [
          "llama2-70b-4096",
          "llama3-70b-8192",
          "llama3-8b-8192",
          "mixtral-8x7b-32768",
          "gemma-7b-it",
          ]
        fetch: false
      titleConvo: true
      titleModel: "mixtral-8x7b-32768"
      modelDisplayLabel: "groq"

    # Mistral AI API
    - name: "Mistral"
      apiKey: "${MISTRAL_API_KEY}"
      baseURL: "https://api.mistral.ai/v1"
      models:
        default: [
          "mistral-tiny",
          "mistral-small",
          "mistral-medium",
          "mistral-large-latest"
          ]
        fetch: true
      titleConvo: true
      titleModel: "mistral-tiny"
      modelDisplayLabel: "Mistral"
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]

    # OpenRouter.ai
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["openai/gpt-3.5-turbo"]
        fetch: true
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      modelDisplayLabel: "OpenRouter"

    # Perplexity
    - name: "Perplexity"
      apiKey: "${PERPLEXITY_API_KEY}"
      baseURL: "https://api.perplexity.ai/"
      models:
        default: [
          "mistral-7b-instruct",
          "sonar-small-chat",
          "sonar-small-online",
          "sonar-medium-chat",
          "sonar-medium-online"
          ]
        fetch: false # fetching list of models is not supported
      titleConvo: true
      titleModel: "sonar-medium-chat"
      summarize: false
      summaryModel: "sonar-medium-chat"
      dropParams: ["stop", "frequency_penalty"]
      modelDisplayLabel: "Perplexity"

    # ShuttleAI API
    - name: "ShuttleAI"
      apiKey: "${SHUTTLEAI_API_KEY}"
      baseURL: "https://api.shuttleai.app/v1"
      models:
        default: [
          "shuttle-1", "shuttle-turbo"
          ]
        fetch: true
      titleConvo: true
      titleModel: "gemini-pro"
      summarize: false
      summaryModel: "llama-summarize"
      modelDisplayLabel: "ShuttleAI"
      dropParams: ["user"]

    # together.ai
    - name: "together.ai"
      apiKey: "${TOGETHERAI_API_KEY}"
      baseURL: "https://api.together.xyz"
      models:
        default: [
          "zero-one-ai/Yi-34B-Chat",
          "Austism/chronos-hermes-13b",
          "DiscoResearch/DiscoLM-mixtral-8x7b-v2",
          "Gryphe/MythoMax-L2-13b",
          "lmsys/vicuna-13b-v1.5",
          "lmsys/vicuna-7b-v1.5",
          "lmsys/vicuna-13b-v1.5-16k",
          "codellama/CodeLlama-13b-Instruct-hf",
          "codellama/CodeLlama-34b-Instruct-hf",
          "codellama/CodeLlama-70b-Instruct-hf",
          "codellama/CodeLlama-7b-Instruct-hf",
          "togethercomputer/llama-2-13b-chat",
          "togethercomputer/llama-2-70b-chat",
          "togethercomputer/llama-2-7b-chat",
          "NousResearch/Nous-Capybara-7B-V1p9",
          "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
          "NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT",
          "NousResearch/Nous-Hermes-Llama2-70b",
          "NousResearch/Nous-Hermes-llama-2-7b",
          "NousResearch/Nous-Hermes-Llama2-13b",
          "NousResearch/Nous-Hermes-2-Yi-34B",
          "openchat/openchat-3.5-1210",
          "Open-Orca/Mistral-7B-OpenOrca",
          "togethercomputer/Qwen-7B-Chat",
          "snorkelai/Snorkel-Mistral-PairRM-DPO",
          "togethercomputer/alpaca-7b",
          "togethercomputer/falcon-40b-instruct",
          "togethercomputer/falcon-7b-instruct",
          "togethercomputer/GPT-NeoXT-Chat-Base-20B",
          "togethercomputer/Llama-2-7B-32K-Instruct",
          "togethercomputer/Pythia-Chat-Base-7B-v0.16",
          "togethercomputer/RedPajama-INCITE-Chat-3B-v1",
          "togethercomputer/RedPajama-INCITE-7B-Chat",
          "togethercomputer/StripedHyena-Nous-7B",
          "Undi95/ReMM-SLERP-L2-13B",
          "Undi95/Toppy-M-7B",
          "WizardLM/WizardLM-13B-V1.2",
          "garage-bAInd/Platypus2-70B-instruct",
          "mistralai/Mistral-7B-Instruct-v0.1",
          "mistralai/Mistral-7B-Instruct-v0.2",
          "mistralai/Mixtral-8x7B-Instruct-v0.1",
          "teknium/OpenHermes-2-Mistral-7B",
          "teknium/OpenHermes-2p5-Mistral-7B",
          "upstage/SOLAR-10.7B-Instruct-v1.0"
          ]
        fetch: false # fetching list of models is not supported
      titleConvo: true
      titleModel: "togethercomputer/llama-2-7b-chat"
      summarize: false
      summaryModel: "togethercomputer/llama-2-7b-chat"
      modelDisplayLabel: "together.ai"
```

</Callout>
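The `${VAR_NAME}` values in `apiKey` and `baseURL` above are read from environment variables that you define (e.g. in your `.env` file). As a rough illustration of how this kind of substitution behaves — a hypothetical sketch, not LibreChat's actual implementation — unset variables are simply left untouched:

```python
import os
import re

def interpolate_env(value: str) -> str:
    """Replace ${VAR_NAME} placeholders with environment variable values.

    Placeholders for unset variables are left as-is.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        value,
    )

os.environ["MISTRAL_API_KEY"] = "sk-demo"
print(interpolate_env("${MISTRAL_API_KEY}"))  # -> sk-demo
```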
## Example with Comments

This example configuration file sets up LibreChat with detailed options across several key areas:

- **Caching**: Enabled to improve performance.
- **File Handling**:
  - **File Strategy**: Commented out, but hints at possible integration with Firebase for file storage.
  - **File Configurations**: Customizes file upload limits and allowed MIME types for different endpoints, including a global server file size limit and a specific limit for user avatar images.
- **Rate Limiting**: Defines thresholds for the maximum number of file uploads allowed per IP and user within a specified time window, aiming to prevent abuse.
- **Registration**: Allows registration from specified social login providers and email domains, enhancing security and user management.
- **Endpoints**:
  - **Assistants**: Configures the assistants endpoint with a polling interval and a timeout for operations, and provides an option to disable the builder interface.
  - **Custom Endpoints**:
    - Configures three external AI service endpoints (groq, Mistral, and OpenRouter), including API keys, base URLs, model handling, and specific feature toggles like conversation titles, summarization, and parameter adjustments.
    - For Mistral, it enables dynamic model fetching, shows how to apply additional parameters for safe prompts, and explicitly drops unsupported parameters.
    - For OpenRouter, it enables dynamic model fetching, specifies a model for conversation titles, and drops the `stop` parameter.
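The `dropParams` option removes named parameters from the outgoing request payload before it reaches the provider, which is how the examples below avoid 422 errors from providers that reject OpenAI-style defaults. Conceptually, it behaves like this hypothetical sketch (not LibreChat's actual code):

```python
def drop_params(payload: dict, drop: list) -> dict:
    """Return a copy of the request payload without the dropped parameter names."""
    return {k: v for k, v in payload.items() if k not in drop}

# Example: the parameters Mistral requires you to drop
payload = {"model": "mistral-tiny", "messages": [], "user": "u1", "stop": ["\n"]}
cleaned = drop_params(payload, ["stop", "user", "frequency_penalty", "presence_penalty"])
print(sorted(cleaned))  # -> ['messages', 'model']
```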
<Callout type="example" title="Commented Example" collapsible>

```yaml filename="librechat.yaml"
# For more information, see the Configuration Guide:
# https://www.librechat.ai/docs/configuration/librechat_yaml

# Configuration version (required)
version: 1.3.4

# Cache settings: Set to true to enable caching
cache: true

# Custom interface configuration
interface:
  # MCP Servers UI configuration
  mcpServers:
    placeholder: 'MCP Servers'

  # Privacy policy settings
  privacyPolicy:
    externalUrl: 'https://librechat.ai/privacy-policy'
    openNewTab: true

  # Terms of service
  termsOfService:
    externalUrl: 'https://librechat.ai/tos'
    openNewTab: true

# Example Registration Object Structure (optional)
registration:
  socialLogins: ['github', 'google', 'discord', 'openid', 'facebook']
  # allowedDomains:
  # - "gmail.com"

# rateLimits:
#   fileUploads:
#     ipMax: 100
#     ipWindowInMinutes: 60 # Rate limit window for file uploads per IP
#     userMax: 50
#     userWindowInMinutes: 60 # Rate limit window for file uploads per user
#   conversationsImport:
#     ipMax: 100
#     ipWindowInMinutes: 60 # Rate limit window for conversation imports per IP
#     userMax: 50
#     userWindowInMinutes: 60 # Rate limit window for conversation imports per user

# Definition of custom endpoints
endpoints:
  # assistants:
  #   disableBuilder: false # Disable Assistants Builder Interface by setting to `true`
  #   pollIntervalMs: 750 # Polling interval for checking assistant updates
  #   timeoutMs: 180000 # Timeout for assistant operations
  #   # Should only be one or the other, either `supportedIds` or `excludedIds`
  #   supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
  #   # excludedIds: ["asst_excludedAssistantId"]
  #   # Only show assistants that the user created or that were created externally (e.g. in Assistants playground).
  #   # privateAssistants: false # Does not work with `supportedIds` or `excludedIds`
  #   # (optional) Models that support retrieval, will default to latest known OpenAI models that support the feature
  #   retrievalModels: ["gpt-4-turbo-preview"]
  #   # (optional) Assistant Capabilities available to all users. Omit the ones you wish to exclude. Defaults to list below.
  #   capabilities: ["code_interpreter", "retrieval", "actions", "tools", "image_vision"]
  custom:
    # Groq Example
    - name: 'groq'
      apiKey: '${GROQ_API_KEY}'
      baseURL: 'https://api.groq.com/openai/v1/'
      models:
        default: [
          "llama3-70b-8192",
          "llama3-8b-8192",
          "llama2-70b-4096",
          "mixtral-8x7b-32768",
          "gemma-7b-it",
          ]
        fetch: false
      titleConvo: true
      titleModel: 'mixtral-8x7b-32768'
      modelDisplayLabel: 'groq'

    # Mistral AI Example
    - name: 'Mistral' # Unique name for the endpoint
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # recommended environment variables:
      apiKey: '${MISTRAL_API_KEY}'
      baseURL: 'https://api.mistral.ai/v1'

      # Models configuration
      models:
        # List of default models to use. At least one value is required.
        default: ['mistral-tiny', 'mistral-small', 'mistral-medium']
        # Fetch option: Set to true to fetch models from API.
        fetch: true # Defaults to false.

      # Optional configurations

      # Title Conversation setting
      titleConvo: true # Set to true to enable title conversation

      # Title Method: Choose between "completion" or "functions".
      # titleMethod: "completion" # Defaults to "completion" if omitted.

      # Title Model: Specify the model to use for titles.
      titleModel: 'mistral-tiny' # Defaults to "gpt-3.5-turbo" if omitted.

      # Summarize setting: Set to true to enable summarization.
      # summarize: false

      # Summary Model: Specify the model to use if summarization is enabled.
      # summaryModel: "mistral-tiny" # Defaults to "gpt-3.5-turbo" if omitted.

      # The label displayed for the AI model in messages.
      modelDisplayLabel: 'Mistral' # Default is "AI" when not set.

      # Add additional parameters to the request. Default params will be overwritten.
      # addParams:
      #   safe_prompt: true # This field is specific to Mistral AI: https://docs.mistral.ai/api/

      # Drop default parameters from the request. See default params in the guide linked below.
      # NOTE: For Mistral, it is necessary to drop the following parameters or you will encounter a 422 Error:
      dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']

    # OpenRouter Example
    - name: 'OpenRouter'
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # recommended environment variables:
      # Known issue: you should not use `OPENROUTER_API_KEY` as it will then override the `openAI` endpoint to use OpenRouter as well.
      apiKey: '${OPENROUTER_KEY}'
      baseURL: 'https://openrouter.ai/api/v1'
      models:
        default: ['meta-llama/llama-3-70b-instruct']
        fetch: true
      titleConvo: true
      titleModel: 'meta-llama/llama-3-70b-instruct'
      # Recommended: Drop the stop parameter from the request as OpenRouter models use a variety of stop tokens.
      dropParams: ['stop']
      modelDisplayLabel: 'OpenRouter'

# fileConfig:
#   endpoints:
#     assistants:
#       fileLimit: 5
#       fileSizeLimit: 10 # Maximum size for an individual file in MB
#       totalSizeLimit: 50 # Maximum total size for all files in a single request in MB
#       supportedMimeTypes:
#         - "image/.*"
#         - "application/pdf"
#     openAI:
#       disabled: true # Disables file uploading to the OpenAI endpoint
#     default:
#       totalSizeLimit: 20
#     YourCustomEndpointName:
#       fileLimit: 2
#       fileSizeLimit: 5
#   serverFileSizeLimit: 100 # Global server file size limit in MB
#   avatarSizeLimit: 2 # Limit for user avatar image size in MB
# See the Custom Configuration Guide for more information:
# https://www.librechat.ai/docs/configuration/librechat_yaml
```

</Callout>
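The commented `rateLimits` block expresses each limit as a maximum count per window in minutes, keyed per IP or per user. A simplified fixed-window counter illustrating those semantics — a hypothetical sketch, not LibreChat's actual rate limiter:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `max_hits` events per `window_minutes` per key (e.g. an IP or user id)."""

    def __init__(self, max_hits, window_minutes):
        self.max_hits = max_hits
        self.window_seconds = window_minutes * 60
        self.counts = defaultdict(int)  # (key, window id) -> hit count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window_seconds))
        if self.counts[bucket] >= self.max_hits:
            return False
        self.counts[bucket] += 1
        return True

# Mirrors the commented config: ipMax: 100, ipWindowInMinutes: 60
limiter = FixedWindowLimiter(max_hits=100, window_minutes=60)
results = [limiter.allow("203.0.113.7", now=1000.0) for _ in range(101)]
print(results.count(True))  # -> 100 (the 101st request in the window is rejected)
```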