---
title: "Memory Configuration"
icon: Brain
---

## Overview

The `memory` object allows you to configure conversation memory and personalization features for the application. This configuration controls how the system remembers and personalizes conversations, including token limits, message context windows, and agent-based memory processing.

## Example

```yaml filename="memory"
memory:
  disabled: false
  validKeys: ["user_preferences", "conversation_context", "personal_info"]
  tokenLimit: 2000
  charLimit: 10000
  personalize: true
  messageWindowSize: 5
  agent:
    provider: "openAI"
    model: "gpt-4"
    instructions: "You are a helpful assistant that remembers user preferences and context."
    model_parameters:
      temperature: 0.7
      max_tokens: 1000
```

## disabled

<OptionTable
  options={[
    ['disabled', 'Boolean', 'Disables memory functionality when set to true. When disabled, the system will not store or use conversation memory.', 'disabled: false'],
  ]}
/>

**Default:** `false`

```yaml filename="memory / disabled"
memory:
  disabled: true
```

## validKeys

<OptionTable
  options={[
    ['validKeys', 'Array of Strings', 'Specifies which keys are valid for memory storage. This helps control what types of information can be stored in memory.', 'validKeys: ["user_name", "preferences", "context"]'],
  ]}
/>

**Default:** No restriction (all keys are valid)

```yaml filename="memory / validKeys"
memory:
  validKeys:
    - "user_preferences"
    - "conversation_context"
    - "personal_information"
    - "learned_facts"
```

## tokenLimit

<OptionTable
  options={[
    ['tokenLimit', 'Number', 'Sets the maximum number of tokens that can be used for memory storage and processing.', 'tokenLimit: 2000'],
  ]}
/>

**Default:** No limit

```yaml filename="memory / tokenLimit"
memory:
  tokenLimit: 2000
```

## charLimit

<OptionTable
  options={[
    ['charLimit', 'Number', 'Sets the maximum number of characters allowed for individual memory entries. This prevents oversized memory payloads that could impact performance or exceed API limits.', 'charLimit: 10000'],
  ]}
/>

**Default:** `10000`

```yaml filename="memory / charLimit"
memory:
  charLimit: 10000
```

## personalize

<OptionTable
  options={[
    ['personalize', 'Boolean', 'When set to true, gives users the ability to opt in or out of using memory features. Users can toggle memory on/off in their chat interface. When false, memory features are completely disabled.', 'personalize: true'],
  ]}
/>

**Default:** `true`

```yaml filename="memory / personalize"
memory:
  personalize: false
```

## messageWindowSize

<OptionTable
  options={[
    ['messageWindowSize', 'Number', 'Specifies the number of recent messages to include in the memory context window.', 'messageWindowSize: 5'],
  ]}
/>

**Default:** `5`

```yaml filename="memory / messageWindowSize"
memory:
  messageWindowSize: 10
```

## agent

<OptionTable
  options={[
    ['agent', 'Object | Union', 'Configures the agent responsible for memory processing. Can be either a reference to an existing agent by ID or a complete agent configuration.', 'agent: { provider: "openAI", model: "gpt-4" }'],
  ]}
/>

The `agent` field supports two different configuration formats:

### Agent by ID

When you have a pre-configured agent, you can reference it by its ID:

```yaml filename="memory / agent (by ID)"
memory:
  agent:
    id: "memory-agent-001"
```

### Custom Agent Configuration

For more control, you can define a complete agent configuration:

```yaml filename="memory / agent (custom)"
memory:
  agent:
    provider: "openAI"
    model: "gpt-4"
    instructions: "You are a memory assistant that helps maintain conversation context and user preferences."
    model_parameters:
      temperature: 0.3
      max_tokens: 1500
      top_p: 0.9
```

#### Agent Configuration Fields

When using a custom agent configuration, the following fields are available:

**provider** (required)

<OptionTable
  options={[
    ['provider', 'String', 'Specifies the AI provider for the memory agent. Can be a built-in provider (e.g., "openAI", "anthropic", "google") or a custom endpoint name.', 'provider: "openAI"'],
  ]}
/>

**model** (required)

<OptionTable
  options={[
    ['model', 'String', 'Specifies the model to use for memory processing.', 'model: "gpt-4"'],
  ]}
/>

**instructions** (optional)

<OptionTable
  options={[
    ['instructions', 'String', 'Custom instructions that replace the default instructions for when to set and/or delete memory. These should mainly be used together with validKeys that require specific information handling.', 'instructions: "Only store user preferences and facts when explicitly mentioned."'],
  ]}
/>

**model_parameters** (optional)

<OptionTable
  options={[
    ['model_parameters', 'Object', 'Additional parameters to pass to the model for fine-tuning its behavior.', 'model_parameters: { temperature: 0.7 }'],
  ]}
/>
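
As a sketch of how `model_parameters` pass through: the accepted keys depend on the provider and model you configure, so the OpenAI-style parameter names below are illustrative rather than exhaustive:

```yaml filename="memory / agent / model_parameters"
memory:
  agent:
    provider: "openAI"
    model: "gpt-4"
    model_parameters:
      temperature: 0.2   # lower values keep memory extraction more deterministic
      max_tokens: 1000   # caps the length of each memory-processing response
```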

## Complete Configuration Example

Here's a comprehensive example showing all memory configuration options:

```yaml filename="librechat.yaml"
version: 1.3.5
cache: true

memory:
  disabled: false
  validKeys:
    - "user_preferences"
    - "conversation_context"
    - "learned_facts"
    - "personal_information"
  tokenLimit: 3000
  charLimit: 10000
  personalize: true
  messageWindowSize: 8
  agent:
    provider: "openAI"
    model: "gpt-4"
    instructions: |
      Store memory using only the specified validKeys. For user_preferences: save
      explicitly stated preferences about communication style, topics of interest,
      or workflow preferences. For conversation_context: save important facts or
      ongoing projects mentioned. For learned_facts: save objective information
      about the user. For personal_information: save only what the user explicitly
      shares about themselves. Delete outdated or incorrect information promptly.
    model_parameters:
      temperature: 0.2
      max_tokens: 2000
      top_p: 0.8
      frequency_penalty: 0.1
```

## Using Custom Endpoints

The memory feature supports custom endpoints. When using a custom endpoint, the `provider` field should match the custom endpoint's `name` exactly. Custom headers with environment variables and user placeholders are properly resolved.

```yaml filename="librechat.yaml with custom endpoint for memory"
endpoints:
  custom:
    - name: 'Custom Memory Endpoint'
      apiKey: 'dummy'
      baseURL: 'https://api.gateway.ai/v1'
      headers:
        x-gateway-api-key: '${GATEWAY_API_KEY}'
        x-gateway-virtual-key: '${GATEWAY_OPENAI_VIRTUAL_KEY}'
        X-User-Identifier: '{{LIBRECHAT_USER_EMAIL}}'
        X-Application-Identifier: 'LibreChat - Test'
        api-key: '${TEST_CUSTOM_API_KEY}'
      models:
        default:
          - 'gpt-4o-mini'
          - 'gpt-4o'
        fetch: false

memory:
  disabled: false
  tokenLimit: 3000
  personalize: true
  messageWindowSize: 10
  agent:
    provider: 'Custom Memory Endpoint'
    model: 'gpt-4o-mini'
```

- See [Custom Endpoint Headers](/docs/configuration/librechat_yaml/object_structure/custom_endpoint#headers) for all available placeholders

## Notes

- Memory functionality enhances conversation continuity and personalization
- When `personalize` is true, users get a toggle in their chat interface to control memory usage
- Token limits help control memory usage and processing costs
- Valid keys provide granular control over what information can be stored
- Custom `instructions` replace the default memory-handling instructions and should be used with `validKeys`
- Agent configuration allows customization of memory processing behavior
- When disabled, all memory features are turned off regardless of other settings
- The message window size affects how much recent context is considered for memory updates
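
Tying the notes together, a minimal sketch that enables memory while leaving most behavior at its defaults (the values here are illustrative):

```yaml filename="memory (minimal)"
memory:
  personalize: true   # users get an in-chat toggle to control memory usage
  tokenLimit: 2000    # keeps memory storage and processing costs bounded
```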