📙 docs: missing env. vars, new config fields, bun lockfile, broken links (#366)

* docs: add `temporaryChatRetention`, which configures the retention period for temporary chats, to changelog

* docs: add `clientImageResize`, which enables automatic client-side image resizing before upload, to changelog

* docs: Add rate limiting configurations for login, registration, conversation imports, and forking

- Introduced detailed documentation for rate limiting features to prevent abuse, including limits on login attempts, registration, conversation imports, and conversation forking.
- Added configuration options for both IP and user-based limits across these features.
- Updated existing documentation to reflect new rate limiting capabilities and their implications for system security and resource management.
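
For illustration only, limits of this kind are usually toggled via `.env`; the variable names below are hypothetical and must be checked against the dotenv rate-limiting section:

```bash
# Hypothetical .env sketch -- variable names are illustrative
LOGIN_MAX=7            # login attempts allowed per window
LOGIN_WINDOW=5         # window length in minutes
REGISTER_MAX=5
REGISTER_WINDOW=60
IMPORT_IP_MAX=100      # conversation imports per IP
IMPORT_USER_MAX=50     # conversation imports per user
FORK_IP_MAX=30
FORK_USER_MAX=7
```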

* docs: Fix typos in OpenID on-behalf flow configuration variables

* docs: Add GOOGLE_SERVICE_KEY_FILE environment variable for Vertex AI authentication

- Introduced the `GOOGLE_SERVICE_KEY_FILE` option in the dotenv documentation, allowing users to specify the path to their Google service account JSON key file.
- Updated the Google configuration documentation to include alternative methods for managing service account credentials, enhancing flexibility for users.
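
A minimal `.env` sketch; the file path is illustrative:

```bash
# Path to the Google service account JSON key file
GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
```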

* chore: Update bun.lockb to reflect dependency changes

* docs: Update changelog for version 1.2.8 with enhancements to MCP server management and new features

- Enhanced MCP server management with new features including `serverInstructions`, user placeholder variables, `customUserVars`, and centralized `mcpServers` configuration.
- Improved connection status tracking and OAuth support for MCP servers.
- Added support for dynamic user field placeholders in custom endpoint headers.
- Updated `clientImageResize` and `temporaryChatRetention` configurations with detailed descriptions.
- Enhanced web search configuration with comprehensive Firecrawl scraper options.
- Improved Model Specs documentation with updates on parameter support.

* docs: Update OCR configuration documentation for Google Vertex AI support

- Added `vertexai_mistral_ocr` strategy option for OCR configuration, enabling the use of Mistral OCR models on Google Cloud Vertex AI.
- Updated documentation to include methods for providing Google Cloud service account credentials, including file path, URL, base64 encoded JSON, and raw JSON string.
- Enhanced existing sections to reflect the new strategy and its requirements, including automatic extraction of project ID and JWT authentication for secure access.
- Included example configurations for Google Vertex AI deployments in the documentation.
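
As a sketch, the new strategy would slot into the existing `ocr` block; the model value and the reuse of the `mistralModel` field (mirroring the Azure setup) are assumptions to verify against the OCR configuration page:

```yaml
ocr:
  strategy: "vertexai_mistral_ocr"
  mistralModel: "mistral-ocr-latest"  # assumed deployment/model name
```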

* docs: Enhance GOOGLE_SERVICE_KEY_FILE documentation for base64 encoding option

- Added support for providing the `GOOGLE_SERVICE_KEY_FILE` as a base64 encoded JSON string in the documentation.
- Updated existing sections to clarify the various methods for specifying Google Cloud service account credentials, improving user guidance for Vertex AI authentication.
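
A sketch of the base64 form; with GNU coreutils, the value could be generated from an existing key file like so:

```bash
# Encode the key file once, then paste the output into .env:
#   base64 -w 0 /path/to/service-account.json
GOOGLE_SERVICE_KEY_FILE=eyJ0eXBlIjoic2VydmljZV9hY2NvdW50Iiwg...
```

(The truncated value is illustrative, not a real key.)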

* docs: Add `REDIS_KEY_PREFIX_VAR` to dotenv documentation for dynamic Redis key prefix

- Introduced `REDIS_KEY_PREFIX_VAR`, an environment variable for specifying a dynamic Redis key prefix, enhancing flexibility for cloud deployments.
- Updated documentation to reflect this new option and its use case, particularly in scenarios involving Kubernetes revision numbers.
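
A `.env` sketch; `K_REVISION` stands in for any platform-provided variable (e.g. a Kubernetes/Knative revision name):

```bash
# Name of another env var whose value becomes the Redis key prefix
REDIS_KEY_PREFIX_VAR=K_REVISION
# e.g. the platform sets K_REVISION=myapp-00042 at deploy time
```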

* docs: Update changelog and configuration documentation for title generation enhancements

- Added detailed configuration options for title generation across all endpoints, including `titleMethod`, `titlePrompt`, `titlePromptTemplate`, and `titleEndpoint`.
- Updated existing documentation to reflect these new options and their usage, improving clarity on how to customize title generation.
- Included notes on shared endpoint settings applicable to various AI models, enhancing user guidance for configuration.

* fix: broken links

* chore: formatting

* chore: reorganize librechat.yaml subsections
Committed by Danny Avila via GitHub, 2025-07-23 12:06:27 -04:00
parent 8dc29fd0ef · commit e3475d8034
23 changed files with 698 additions and 78 deletions


@@ -3,33 +3,80 @@
- Supports Azure-specific deployment configurations with custom endpoints
- Requires Azure deployment name to be specified in `mistralModel` field
- Compatible with Mistral OCR 2503 model available on Azure AI Foundry
- See [OCR Configuration](/docs/configuration/librechat_yaml/object_structure/ocr) for details
- Added `vertexai_mistral_ocr` strategy option for OCR configuration
- Enables use of Mistral OCR models deployed on Google Cloud Vertex AI
- Supports multiple methods for providing Google Cloud service account credentials:
- File path to service account JSON
- URL to fetch service account JSON
- Base64 encoded service account JSON
- Raw JSON string
- Automatically extracts project ID from service account credentials
- Uses JWT authentication for secure access to Vertex AI endpoints
- See [OCR Configuration](/docs/configuration/librechat_yaml/object_structure/ocr) for details
- Enhanced MCP (Model Context Protocol) server management with comprehensive new features:
- Added `serverInstructions` to MCP Servers object structure - controls whether server instructions are included in agent context when using MCP Server tools
- Added user placeholder variables support:
- `{{LIBRECHAT_USER_ID}}` - replaced with the current user's ID, enabling multi-user support
- `{{LIBRECHAT_USER_*}}` - dynamic user field placeholders (replace `*` with the UPPERCASE version of any allowed field)
- Supported in `headers` and `url` fields
- Added `customUserVars` for per-user credentials:
- Allows users to securely provide their own API keys and credentials for MCP servers
- Configurable through UI via settings icon in tool selection or MCP Settings panel
- Ensures authentication details remain private in multi-user environments
- Added `mcpServers` to the `interface` configuration:
- Centralizes UI-related settings for Model Context Protocol servers
- Includes `placeholder` option to customize text shown in server selection dropdown when no MCP is selected
- Enhanced server management with connection status tracking and OAuth support:
- Added dynamic status icons showing server state (connected, disconnected, OAuth required, error, initializing)
- Implemented server (re)initialization with OAuth flow support through dropdown and settings panel
- Added set/unset state tracking for `customUserVars`
- Unified server listing across MCPSelect and MCPPanel with consistent status display
- See [MCP Servers Object Structure](/docs/configuration/librechat_yaml/object_structure/mcp_servers) for complete details
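
Taken together, the MCP additions above might look like the following `librechat.yaml` sketch. The server name, URL, and variable names are illustrative, and the exact `customUserVars` sub-fields are assumptions to verify against the MCP Servers page:

```yaml
mcpServers:
  my-server:                  # illustrative server name
    url: "https://mcp.example.com/{{LIBRECHAT_USER_ID}}/sse"
    serverInstructions: true
    headers:
      X-User-Email: "{{LIBRECHAT_USER_EMAIL}}"
    customUserVars:
      MY_API_KEY:             # illustrative variable name
        title: "My API Key"
        description: "Entered per user via the MCP Settings panel"

interface:
  mcpServers:
    placeholder: "Select an MCP server"
```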
- Added user placeholder variables support to Custom Endpoint Headers:
- Users can now use `{{LIBRECHAT_USER_ID}}`, `{{LIBRECHAT_USER_EMAIL}}`, and other user field placeholders in custom endpoint headers
- See: [Custom Endpoint Object Structure - Headers](/docs/configuration/librechat_yaml/object_structure/custom_endpoint#headers) for details
- Added `temporaryChatRetention` to interface configuration:
- Configures the retention period for temporary chats in hours (min: 1, max: 8760)
- Default retention is 720 hours (30 days) if not specified
- Can be set via environment variable `TEMP_CHAT_RETENTION_HOURS` or in `librechat.yaml`
- Applies to newly created temporary chats only; existing chats retain their original expiration
- See [Interface Object Structure - temporaryChatRetention](/docs/configuration/librechat_yaml/object_structure/interface#temporarychatretention) for details
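
For instance, to retain temporary chats for 24 hours instead of the 720-hour default (a sketch based on the options above):

```yaml
# librechat.yaml
interface:
  temporaryChatRetention: 24   # hours, 1-8760
```

or, equivalently, `TEMP_CHAT_RETENTION_HOURS=24` in `.env`.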
- Added `clientImageResize` to `fileConfig` configuration:
- Enables automatic client-side image resizing before upload to prevent failures and optimize performance
- Configurable maximum dimensions (`maxWidth`/`maxHeight`) with aspect ratio preservation
- Adjustable JPEG/WebP compression quality (0.1-1.0) for file size optimization
- Supports output format selection between JPEG and WebP
- Helps reduce bandwidth usage and improve upload speeds
- See [File Config Object Structure - clientImageResize](/docs/configuration/librechat_yaml/object_structure/file_config#clientimageresize) for details
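
A hedged sketch; `maxWidth`, `maxHeight`, and the quality range come from the bullets above, while the toggle and output-format field names are assumptions to verify against the File Config page:

```yaml
fileConfig:
  clientImageResize:
    enabled: true              # assumed toggle field
    maxWidth: 1920             # aspect ratio is preserved
    maxHeight: 1080
    quality: 0.8               # JPEG/WebP quality, 0.1-1.0
    compressionFormat: "webp"  # assumed field name; JPEG or WebP output
```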
- Enhanced `webSearch` configuration with comprehensive Firecrawl scraper options:
- Added detailed configuration options for Firecrawl scraper including `formats`, `includeTags`, `excludeTags`, `headers`, `waitFor`, `timeout`, `maxAge`, `mobile`, `skipTlsVerification`, `parsePDF`, `removeBase64Images`, `blockAds`, `storeInCache`, `zeroDataRetention`, `onlyMainContent`, `location`, and `changeTrackingOptions`
- See [Web Search Configuration](/docs/configuration/librechat_yaml/object_structure/web_search) for details
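
A sketch of a few of these options in `librechat.yaml`; the option names are from the list above, but the exact nesting under `webSearch` is an assumption to verify against the Web Search page:

```yaml
webSearch:
  firecrawlOptions:        # assumed nesting key
    formats: ["markdown"]
    onlyMainContent: true
    waitFor: 1000          # ms to wait before scraping
    timeout: 30000         # ms
    blockAds: true
    removeBase64Images: true
```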
- Added title generation configuration with advanced options for all endpoints:
- Added `titleMethod` to control title generation strategy:
- `"completion"` (new default) - Uses standard completion API for broader LLM compatibility
- `"structured"` or `"functions"` (legacy) - Uses structured output/function calling (not all providers support this)
- Added `titlePrompt` for customizing the main title generation prompt:
- Must include `{convo}` placeholder for conversation content
- Allows full control over how the AI generates titles
- Added `titlePromptTemplate` for customizing conversation formatting:
- Must include `{input}` and `{output}` placeholders
- Default: `"User: {input}\nAI: {output}"`
- Controls how the conversation is formatted when inserted into `titlePrompt`
- Added `titleEndpoint` to use alternative endpoints for title generation:
- Supported values: `openAI`, `azureOpenAI`, `google`, `anthropic`, `bedrock`
- For custom endpoints, use the exact custom endpoint name as defined in configuration
- Allows using a different, potentially cheaper model/endpoint for titles
- All title configuration options are now available as [Shared Endpoint Settings](/docs/configuration/librechat_yaml/object_structure/shared_endpoint_settings)
- Can be configured globally using the `all:` endpoint definition
- See individual endpoint documentation for specific examples
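
Sketched globally via the `all:` endpoint definition (placement per the Shared Endpoint Settings note above; the prompt wording is illustrative):

```yaml
endpoints:
  all:
    titleMethod: "completion"
    titleEndpoint: "openAI"   # cheaper model/endpoint just for titles
    titlePrompt: "Write a concise title for this conversation:\n{convo}"
    titlePromptTemplate: "User: {input}\nAI: {output}"
```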
- Improved [Model Specs documentation](/docs/configuration/librechat_yaml/object_structure/model_specs) with parameter support updates:
- Added support for `disableStreaming`, `thinking`, `thinkingBudget`, `web_search`, and other parameters


@@ -14,7 +14,7 @@
- **LibreChat hit a major milestone and was featured on GitHub trending several days in a row!**
- Thank you all who checked out this project, there are still more features to come and in active development!
- **[Customizing endpoints is now possible (that follow OpenAI specs):](/docs/configuration/librechat_yaml/ai_endpoints)**
![fd0d2307-008f-4e1d-b75b-4f141070ce71](https://user-images.githubusercontent.com/110412045/298197340-f8c71b96-c729-454a-a485-99fc2c4ef6ae.png)