mirror of
https://github.com/LibreChat-AI/librechat.ai.git
synced 2026-03-27 10:48:32 +07:00
🧠 feat: Memories + Documentation Guidelines (#322)
* feat: memories update
* docs: Add LibreChat documentation rules and configuration update guidelines
* docs: Update Node.js version requirements across documentation
  - Revised Node.js prerequisites to specify v20.19.0+ (or ^22.12.0 or >= 23.0.0) in README.md, get_started.mdx, npm.mdx, docker_linux.mdx, and nginx.mdx for clarity and compatibility with openid-client v6.
* docs: Add DEBUG_OPENID_REQUESTS environment variable for enhanced OpenID debugging
  - Introduced a new environment variable, DEBUG_OPENID_REQUESTS, to enable detailed logging of OpenID request headers for better troubleshooting.
  - Updated documentation to include troubleshooting steps for OpenID Connect, emphasizing the use of the new logging feature.
* docs: Update changelog dates for versions v1.2.6 and v1.2.7
* docs: Enhance memory configuration documentation
  - Updated the memory configuration section to clarify the default value for the personalize option.
  - Changed the provider name from "openai" to "openAI" for consistency.
  - Added notes regarding the provider field and valid model parameters.
  - Improved examples for memory agent configuration.
`.cursor/rules/librechat-documentation.mdc` (new file, 146 lines)
@@ -0,0 +1,146 @@
---
description:
globs:
alwaysApply: true
---

# LibreChat Documentation Rules

## Config Version Updates

When updating the LibreChat config version (e.g., from v1.2.6 to v1.2.7), follow these steps:

### 1. Create Changelog Files

#### Main Changelog File
Create: `pages/changelog/config_v{VERSION}.mdx`

Template:
```mdx
---
date: YYYY/MM/DD
title: ⚙️ Config v{VERSION}
---

import { ChangelogHeader } from '@/components/changelog/ChangelogHeader'
import Content from '@/components/changelog/content/config_v{VERSION}.mdx'

<ChangelogHeader />

---

<Content />
```

#### Content File
Create: `components/changelog/content/config_v{VERSION}.mdx`

Format:
- Use bullet points starting with `-`
- Group related changes together
- Include links to detailed documentation using `[Feature Name](/docs/configuration/librechat_yaml/object_structure/{feature})`
- Describe what was added/changed and its purpose
- Keep descriptions concise but informative

Example:
```mdx
- Added `memory` configuration to control memory functionality for conversations
  - Configure memory persistence and personalization settings
  - Set token limits and message window sizes for memory context
  - Configure agents for memory processing with provider-specific settings
  - Supports both predefined agents (by ID) and custom agent configurations
  - See [Memory Configuration](/docs/configuration/librechat_yaml/object_structure/memory) for details
```

### 2. Create Object Structure Documentation

For new root-level configurations, create: `pages/docs/configuration/librechat_yaml/object_structure/{feature}.mdx`

Structure:
1. **Title**: `# {Feature} Configuration`
2. **Overview**: Brief description of the feature
3. **Example**: Complete YAML example showing all options
4. **Field Documentation**: Use `<OptionTable>` components for each field
5. **Subsections**: For complex nested objects
6. **Notes**: Important considerations at the end

### 3. Update Navigation

Add the new feature to: `pages/docs/configuration/librechat_yaml/object_structure/_meta.ts`

Insert alphabetically or logically within the structure:
```ts
export default {
  config: 'Root Settings',
  file_config: 'File Config',
  interface: 'Interface (UI)',
  // ... other entries
  memory: 'Memory', // Add new entry
  // ... remaining entries
}
```

### 4. Update Main Config Documentation

In `pages/docs/configuration/librechat_yaml/object_structure/config.mdx`:

1. Update the version example:
```yaml
['version', 'String', 'Specifies the version of the configuration file.', 'version: 1.2.7'],
```

2. Add the new configuration section (insert alphabetically or logically):
```mdx
## memory

**Key:**
<OptionTable
  options={[
    ['memory', 'Object', 'Brief description of the feature.', ''],
  ]}
/>

**Subkeys:**
<OptionTable
  options={[
    ['field1', 'Type', 'Description', ''],
    ['field2', 'Type', 'Description', ''],
    // ... other fields
  ]}
/>

see: [Memory Object Structure](/docs/configuration/librechat_yaml/object_structure/memory)
```

## Documentation Standards

### OptionTable Usage
```mdx
<OptionTable
  options={[
    ['fieldName', 'Type', 'Description of what the field does.', 'example: value'],
  ]}
/>
```

### YAML Examples
- Use the `filename` attribute for code blocks: ` ```yaml filename="memory" `
- Show realistic, working examples
- Include comments only when necessary for clarity

### Field Descriptions
- Be precise about default values
- Explain the impact of different settings
- Note any relationships between fields
- Mention when fields are required vs optional

### Special Considerations
- For boolean fields that give users control, clarify WHO gets the control (admin vs end-user)
- For fields that replace default behavior, explicitly state this
- For union types, show examples of each variant
- For nested objects, create subsections with their own OptionTables

## Version Numbering
- Config versions follow semantic versioning: v{MAJOR}.{MINOR}.{PATCH}
- Adding new root-level configurations typically warrants a minor version bump
- Breaking changes require a major version bump
- Bug fixes or minor adjustments use patch versions
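The version-numbering rules above amount to a simple decision procedure. The sketch below encodes them in Python (hypothetical helper; the `change` labels are this sketch's invention, and real version bumps are chosen by hand):

```python
def bump_config_version(version: str, change: str) -> str:
    """Suggest the next config version per the rules above.

    change: "breaking" -> major bump, "new-root-config" -> minor bump,
    anything else (bug fix / minor adjustment) -> patch bump.
    """
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "new-root-config":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```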
@@ -4,7 +4,7 @@ Based on [Nextra](https://nextra.site/)

## Local Development

-Pre-requisites: Node.js 18+, pnpm 9+
+Pre-requisites: Node.js v20.19.0+ (or ^22.12.0 or >= 23.0.0), pnpm 9+

1. Optional: Create env based on [.env.template](./.env.template)
2. Run `pnpm i` to install the dependencies.
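The Node.js constraint above reads like the npm engines range `^20.19.0 || ^22.12.0 || >=23.0.0`. Assuming that reading (an assumption of this sketch, as is the function name), a quick check against the output of `node --version`:

```python
def node_version_ok(version: str) -> bool:
    """Check a `node --version` string (e.g. "v20.19.0") against the
    documented range: v20.19.0+, ^22.12.0, or >= 23.0.0."""
    major, minor, patch = (int(p) for p in version.lstrip("v").split("."))
    if major >= 23:
        return True
    if major == 22:
        return (minor, patch) >= (12, 0)
    if major == 20:
        return (minor, patch) >= (19, 0)
    # v21.x and anything below v20.19.0 fall outside the range
    return False
```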
`components/changelog/content/config_v1.2.7.mdx` (new file, 12 lines)
@@ -0,0 +1,12 @@
- Added `memory` configuration to control memory functionality for conversations
  - Configure memory persistence and personalization settings
  - Set token limits and message window sizes for memory context
  - Configure agents for memory processing with provider-specific settings
  - Supports both predefined agents (by ID) and custom agent configurations
  - See [Memory Configuration](/docs/configuration/librechat_yaml/object_structure/memory) for details

- Added memory-related capabilities to conversation processing
  - Enables conversation memory and personalization features
  - Configurable token limits and context window management
  - Integration with agent-based memory processing
  - Defaults to personalization enabled with a 5-message window size
@@ -1,5 +1,5 @@
---
-date: 2025/5/7
+date: 2025/5/8
title: ⚙️ Config v1.2.6
---
`pages/changelog/config_v1.2.7.mdx` (new file, 13 lines)
@@ -0,0 +1,13 @@
---
date: 2025/6/9
title: ⚙️ Config v1.2.7
---

import { ChangelogHeader } from '@/components/changelog/ChangelogHeader'
import Content from '@/components/changelog/content/config_v1.2.7.mdx'

<ChangelogHeader />

---

<Content />
@@ -27,4 +27,14 @@ This section will cover how to configure OAuth2 and OpenID Connect with LibreChat
- [AWS Cognito](/docs/configuration/authentication/OAuth2-OIDC/aws)
- [Azure Entra/AD](/docs/configuration/authentication/OAuth2-OIDC/azure)
- [Keycloak](/docs/configuration/authentication/OAuth2-OIDC/keycloak)
- [Re-use OpenID Tokens for Login Session](/docs/configuration/authentication/OAuth2-OIDC/token-reuse)

## Troubleshooting OpenID Connect

If you encounter issues with OpenID Connect authentication:

1. **Enable Header Debug Logging**: Set `DEBUG_OPENID_REQUESTS=true` in your environment variables to log request headers in addition to URLs (with sensitive data masked). Note: Request URLs are always logged at debug level.
2. **Check Redirect URIs**: Ensure your callback URL matches exactly between your provider and LibreChat configuration
3. **Verify Scopes**: Make sure all required scopes are properly configured
4. **Review Provider Logs**: Check your identity provider's logs for authentication errors
5. **Validate Tokens**: Ensure your provider is issuing valid tokens with the expected claims
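The logging behavior described in step 1 — URLs always logged at debug level, headers only when `DEBUG_OPENID_REQUESTS=true`, sensitive values masked — can be sketched as follows (illustrative Python only; LibreChat itself is a Node.js application, and the function and header names here are assumptions):

```python
import os

# Header names whose values should never appear in logs.
SENSITIVE = {"authorization", "cookie", "x-api-key"}

def log_openid_request(url: str, headers: dict[str, str]) -> list[str]:
    """Build debug log lines for an OpenID request.

    The URL is always included; headers are included only when
    DEBUG_OPENID_REQUESTS=true, with sensitive values masked.
    """
    lines = [f"openid request: {url}"]
    if os.getenv("DEBUG_OPENID_REQUESTS", "").lower() == "true":
        for name, value in headers.items():
            shown = "***" if name.lower() in SENSITIVE else value
            lines.append(f"  {name}: {shown}")
    return lines
```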
@@ -100,4 +100,5 @@ If you encounter issues with token reuse:
2. Check that admin consent has been granted
3. Ensure the API permissions are correctly set up
4. Verify the token cache is working as expected
5. Check the application logs for any authentication errors
6. Enable detailed OpenID request header logging by setting `DEBUG_OPENID_REQUESTS=true` in your environment variables to see request headers in addition to URLs (with sensitive data masked)
@@ -1012,6 +1012,7 @@ For more information:
    ['OPENID_IMAGE_URL', 'string', 'The URL of the OpenID login button image.', 'OPENID_IMAGE_URL='],
    ['OPENID_USE_END_SESSION_ENDPOINT', 'string', 'Whether to use the Issuer End Session Endpoint as a Logout Redirect.', 'OPENID_USE_END_SESSION_ENDPOINT=TRUE'],
    ['OPENID_AUTO_REDIRECT', 'boolean', 'Whether to automatically redirect to the OpenID provider.', 'OPENID_AUTO_REDIRECT=true'],
    ['DEBUG_OPENID_REQUESTS', 'boolean', 'Enable detailed logging of OpenID request headers. When disabled (default), only request URLs are logged at debug level. When enabled, request headers are also logged (with sensitive data masked) for deeper debugging of authentication issues.', 'DEBUG_OPENID_REQUESTS=false'],
  ]}
/>
@@ -6,6 +6,7 @@ export default {
  model_specs: 'Model Specs',
  registration: 'Registration',
  balance: 'Balance',
  memory: 'Memory',
  agents: 'Agents',
  mcp_servers: 'MCP Servers',
  aws_bedrock: 'AWS Bedrock',
@@ -7,7 +7,7 @@

<OptionTable
  options={[
-    ['version', 'String', 'Specifies the version of the configuration file.', 'version: 1.0.8'],
+    ['version', 'String', 'Specifies the version of the configuration file.', 'version: 1.2.7'],
  ]}
/>
@@ -258,6 +258,29 @@ see also:
- [alloweddomains](/docs/configuration/librechat_yaml/object_structure/registration#alloweddomains),
- [Registration Object Structure](/docs/configuration/librechat_yaml/object_structure/registration)

## memory

**Key:**
<OptionTable
  options={[
    ['memory', 'Object', 'Configures conversation memory and personalization features for the application.', ''],
  ]}
/>

**Subkeys:**
<OptionTable
  options={[
    ['disabled', 'Boolean', 'Disables memory functionality when set to true.', ''],
    ['validKeys', 'Array of Strings', 'Specifies which keys are valid for memory storage.', ''],
    ['tokenLimit', 'Number', 'Sets the maximum number of tokens for memory storage and processing.', ''],
    ['personalize', 'Boolean', 'Enables or disables personalization features.', ''],
    ['messageWindowSize', 'Number', 'Specifies the number of recent messages to include in memory context.', ''],
    ['agent', 'Object | Union', 'Configures the agent responsible for memory processing.', ''],
  ]}
/>

see: [Memory Object Structure](/docs/configuration/librechat_yaml/object_structure/memory)

## actions

**Key:**
@@ -0,0 +1,216 @@
# Memory Configuration

## Overview

The `memory` object allows you to configure conversation memory and personalization features for the application. This configuration controls how the system remembers and personalizes conversations, including token limits, message context windows, and agent-based memory processing.

## Example

```yaml filename="memory"
memory:
  disabled: false
  validKeys: ["user_preferences", "conversation_context", "personal_info"]
  tokenLimit: 2000
  personalize: true
  messageWindowSize: 5
  agent:
    provider: "openai"
    model: "gpt-4"
    instructions: "You are a helpful assistant that remembers user preferences and context."
    model_parameters:
      temperature: 0.7
      max_tokens: 1000
```

## disabled

<OptionTable
  options={[
    ['disabled', 'Boolean', 'Disables memory functionality when set to true. When disabled, the system will not store or use conversation memory.', 'disabled: false'],
  ]}
/>

**Default:** `false`

```yaml filename="memory / disabled"
memory:
  disabled: true
```

## validKeys

<OptionTable
  options={[
    ['validKeys', 'Array of Strings', 'Specifies which keys are valid for memory storage. This helps control what types of information can be stored in memory.', 'validKeys: ["user_name", "preferences", "context"]'],
  ]}
/>

**Default:** No restriction (all keys are valid)

```yaml filename="memory / validKeys"
memory:
  validKeys:
    - "user_preferences"
    - "conversation_context"
    - "personal_information"
    - "learned_facts"
```

## tokenLimit

<OptionTable
  options={[
    ['tokenLimit', 'Number', 'Sets the maximum number of tokens that can be used for memory storage and processing.', 'tokenLimit: 2000'],
  ]}
/>

**Default:** No limit

```yaml filename="memory / tokenLimit"
memory:
  tokenLimit: 2000
```

## personalize

<OptionTable
  options={[
    ['personalize', 'Boolean', 'When set to true, gives users the ability to opt in or out of using memory features. Users can toggle memory on/off in their chat interface. When false, memory features are completely disabled.', 'personalize: true'],
  ]}
/>

**Default:** `true`

```yaml filename="memory / personalize"
memory:
  personalize: false
```

## messageWindowSize

<OptionTable
  options={[
    ['messageWindowSize', 'Number', 'Specifies the number of recent messages to include in the memory context window.', 'messageWindowSize: 5'],
  ]}
/>

**Default:** `5`

```yaml filename="memory / messageWindowSize"
memory:
  messageWindowSize: 10
```

## agent

<OptionTable
  options={[
    ['agent', 'Object | Union', 'Configures the agent responsible for memory processing. Can be either a reference to an existing agent by ID or a complete agent configuration.', 'agent: { provider: "openai", model: "gpt-4" }'],
  ]}
/>

The `agent` field supports two different configuration formats:

### Agent by ID

When you have a pre-configured agent, you can reference it by its ID:

```yaml filename="memory / agent (by ID)"
memory:
  agent:
    id: "memory-agent-001"
```

### Custom Agent Configuration

For more control, you can define a complete agent configuration:

```yaml filename="memory / agent (custom)"
memory:
  agent:
    provider: "openai"
    model: "gpt-4"
    instructions: "You are a memory assistant that helps maintain conversation context and user preferences."
    model_parameters:
      temperature: 0.3
      max_tokens: 1500
      top_p: 0.9
```

#### Agent Configuration Fields

When using a custom agent configuration, the following fields are available:

**provider** (required)
<OptionTable
  options={[
    ['provider', 'String', 'Specifies the AI provider for the memory agent (e.g., "openai", "anthropic", "google").', 'provider: "openai"'],
  ]}
/>

**model** (required)
<OptionTable
  options={[
    ['model', 'String', 'Specifies the model to use for memory processing.', 'model: "gpt-4"'],
  ]}
/>

**instructions** (optional)
<OptionTable
  options={[
    ['instructions', 'String', 'Custom instructions that replace the default instructions for when to set and/or delete memory. Should mainly be used when using validKeys that require specific information handling.', 'instructions: "Only store user preferences and facts when explicitly mentioned."'],
  ]}
/>

**model_parameters** (optional)
<OptionTable
  options={[
    ['model_parameters', 'Object', 'Additional parameters to pass to the model for fine-tuning its behavior.', 'model_parameters: { temperature: 0.7 }'],
  ]}
/>

## Complete Configuration Example

Here's a comprehensive example showing all memory configuration options:

```yaml filename="librechat.yaml"
version: 1.2.7
cache: true

memory:
  disabled: false
  validKeys:
    - "user_preferences"
    - "conversation_context"
    - "learned_facts"
    - "personal_information"
  tokenLimit: 3000
  personalize: true
  messageWindowSize: 8
  agent:
    provider: "openai"
    model: "gpt-4"
    instructions: |
      Store memory using only the specified validKeys. For user_preferences: save
      explicitly stated preferences about communication style, topics of interest,
      or workflow preferences. For conversation_context: save important facts or
      ongoing projects mentioned. For learned_facts: save objective information
      about the user. For personal_information: save only what the user explicitly
      shares about themselves. Delete outdated or incorrect information promptly.
    model_parameters:
      temperature: 0.2
      max_tokens: 2000
      top_p: 0.8
      frequency_penalty: 0.1
```

## Notes

- Memory functionality enhances conversation continuity and personalization
- When `personalize` is true, users get a toggle in their chat interface to control memory usage
- Token limits help control memory usage and processing costs
- Valid keys provide granular control over what information can be stored
- Custom `instructions` replace the default memory handling instructions and should be used with `validKeys`
- Agent configuration allows customization of memory processing behavior
- When disabled, all memory features are turned off regardless of other settings
- The message window size affects how much recent context is considered for memory updates
@@ -8,7 +8,7 @@ description: Learn how to contribute using GitHub Desktop, VS Code extensions, a
## Requirements

- [Git](https://git-scm.com/downloads) (Essential)
-- [Node.js](https://nodejs.org/en/download) (Essential, use the LTS version)
+- [Node.js](https://nodejs.org/en/download) (Essential, use v20.19.0+ or ^22.12.0 or >= 23.0.0)
- [MongoDB](https://www.mongodb.com/try/download/community) (Essential, for the database)
- [Git LFS](https://git-lfs.com/) (Useful for larger files)
- [GitHub Desktop](https://desktop.github.com/) (Optional)
@@ -4,6 +4,7 @@ export default {
  code_interpreter: 'Code Interpreter API',
  artifacts: 'Artifacts - Generative UI',
  web_search: 'Web Search',
  memory: 'Memory',
  image_gen: 'Image Generation',
  // local_setup: 'Local Setup',
  // custom_endpoints: 'Custom Endpoints',
@@ -37,6 +37,14 @@ import Image from 'next/image'
- **Flexible Configuration**: Choose from multiple services for each component
- **[Learn More →](/docs/features/web_search)**

### 🧠 [Memory](/docs/features/memory)

- **Persistent Context**: Remember information across conversations for a personalized experience
- **User Control**: Users can toggle memory on/off for individual chats when enabled, as well as create, edit and delete memories manually
- **Customizable Storage**: Control what types of information can be stored with valid keys and token limits
- **Configuration Required**: Must be explicitly configured in `librechat.yaml` to work
- **[Learn More →](/docs/features/memory)**

### 🪄 **[Artifacts](/docs/features/artifacts)**

- **Generative UI:** React, HTML, Mermaid diagrams
`pages/docs/features/memory.mdx` (new file, 172 lines)
@@ -0,0 +1,172 @@
---
title: Memory
description: Enable conversation memory and personalization features in LibreChat
---

# Memory

## Overview

Memory in LibreChat allows the system to remember information across conversations, providing a more personalized and context-aware experience. When enabled, the AI can recall user preferences, important facts, and conversation context to enhance future interactions.

<Callout type="important" title="⚠️ Configuration Required">
Memory functionality must be explicitly configured in your `librechat.yaml` file to work. It is not enabled by default.
</Callout>

## Key Features

- **Persistent Context**: Information learned in one conversation can be recalled in future conversations
- **User Control**: When enabled, users can toggle memory on/off for their individual chats
- **Customizable Storage**: Control what types of information can be stored using valid keys
- **Token Management**: Set limits on memory usage to control costs
- **Agent Integration**: Use AI agents to intelligently manage what gets remembered

## Configuration

To enable memory features, you need to add the `memory` configuration to your `librechat.yaml` file:

```yaml filename="librechat.yaml"
version: 1.2.7
cache: true

memory:
  disabled: false # Set to true to completely disable memory
  personalize: true # Gives users the ability to toggle memory on/off, true by default
  tokenLimit: 2000 # Maximum tokens for memory storage
  messageWindowSize: 5 # Number of recent messages to consider
  agent:
    provider: "openAI"
    model: "gpt-4"
```

The `provider` field should match the accepted values as defined in the [Model Spec Guide](/docs/configuration/librechat_yaml/object_structure/model_specs#endpoint).

**Note:** If you are using a custom endpoint, the endpoint value must match the defined custom endpoint name exactly.

See the [Memory Configuration Guide](/docs/configuration/librechat_yaml/object_structure/memory) for detailed configuration options.

## How It Works

### 1. Information Storage
When memory is enabled, the system can store:
- User preferences (communication style, topics of interest)
- Important facts and context from conversations
- Personal information explicitly shared by users
- Ongoing projects or tasks mentioned

### 2. Context Window
The `messageWindowSize` parameter determines how many recent messages are analyzed for memory updates. This helps the system decide what information is worth remembering.

### 3. User Control
When `personalize` is set to `true`:
- Users see a memory toggle in their chat interface
- They can enable/disable memory for individual conversations
- Memory settings persist across sessions

### 4. Valid Keys
You can restrict what types of information are stored by specifying `validKeys`:

```yaml filename="memory / validKeys"
memory:
  validKeys:
    - "user_preferences"
    - "conversation_context"
    - "learned_facts"
    - "personal_information"
```

## Best Practices

### 1. Token Limits
Set appropriate token limits to balance functionality with cost:
- Higher limits allow more comprehensive memory
- Lower limits reduce processing costs
- Consider your usage patterns and budget

### 2. Custom Instructions
When using `validKeys`, provide custom instructions to the memory agent:

```yaml filename="memory / agent with instructions"
memory:
  agent:
    provider: "openAI"
    model: "gpt-4"
    instructions: |
      Store information only in the specified validKeys categories.
      Focus on explicitly stated preferences and important facts.
      Delete outdated or corrected information promptly.
```

### 3. Privacy Considerations
- Memory stores user information across conversations
- Ensure users understand what information is being stored
- Consider implementing data retention policies
- Provide clear documentation about memory usage

## Examples

### Basic Configuration
Enable memory with default settings:

```yaml filename="Basic memory config"
memory:
  tokenLimit: 2000
  agent:
    provider: "openAI"
    model: "gpt-4.1-mini"
```

### Advanced Configuration
Full configuration with all options:

```yaml filename="Advanced memory config"
memory:
  disabled: false
  validKeys: ["preferences", "context", "facts"]
  tokenLimit: 3000
  personalize: true
  messageWindowSize: 10
  agent:
    provider: "anthropic"
    model: "claude-3-opus-20240229"
    instructions: "Remember only explicitly stated preferences and key facts."
    model_parameters:
      temperature: 0.3
```

For valid model parameters per provider, see the [Model Spec Preset Fields](/docs/configuration/librechat_yaml/object_structure/model_specs#preset-fields).

### Using Predefined Agents
Reference an existing agent by ID:

```yaml filename="Memory with agent ID"
memory:
  agent:
    id: "memory-specialist-001"
```

## Troubleshooting

### Memory Not Working
1. Verify memory is configured in `librechat.yaml`
2. Check that `disabled` is set to `false`
3. Ensure the configured agent/model is available
4. Verify users have enabled memory in their chat interface

### High Token Usage
1. Reduce `tokenLimit` to control costs
2. Decrease `messageWindowSize` to analyze fewer messages
3. Use `validKeys` to restrict what gets stored
4. Review and optimize agent instructions

### Inconsistent Memory
1. Check if users are toggling memory on/off
2. Verify token limits aren't being exceeded
3. Ensure consistent agent configuration
4. Review stored memory for conflicts

## Related Features

- [Agents](/docs/features/agents) - Build custom AI assistants
- [Presets](/docs/user_guides/presets) - Save conversation settings
- [Fork Messages](/docs/features/fork) - Branch conversations while maintaining context
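The `messageWindowSize` behavior described above amounts to a sliding window over recent messages. A minimal sketch, assuming the window is simply the last N messages of the conversation:

```python
def memory_context(messages: list[str], window_size: int = 5) -> list[str]:
    """Return the recent messages a memory agent would analyze,
    per messageWindowSize (default 5). Illustrative only."""
    if window_size <= 0:
        return []  # guard: messages[-0:] would return the whole list
    return messages[-window_size:]
```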
@@ -9,7 +9,8 @@ For most scenarios, Docker Compose is the recommended installation method due to

## Prerequisites

-- Node.js 18+: [https://nodejs.org/en/download](https://nodejs.org/en/download)
+- Node.js v20.19.0+ (or ^22.12.0 or >= 23.0.0): [https://nodejs.org/en/download](https://nodejs.org/en/download)
+  - LibreChat uses CommonJS (CJS) and requires these specific Node.js versions for compatibility with openid-client v6
- Git: https://git-scm.com/download/
- MongoDB (Atlas or Community Server)
  - [MongoDB Atlas](/docs/configuration/mongodb/mongodb_atlas)
@@ -166,7 +166,7 @@ npm -v



-> Note: this will install some pretty old versions, for npm in particular. For the purposes of this guide, however, this is fine, but this is just a heads up in case you try other things with node in the droplet. Do look up a guide for getting the latest versions of the above as necessary.
+> Note: this will install some pretty old versions, for npm in particular. LibreChat requires Node.js v20.19.0+ (or ^22.12.0 or >= 23.0.0) for compatibility with openid-client v6 when using CommonJS. If you need to run LibreChat directly on the host (not using Docker), you'll need to install a compatible Node.js version. However, for this Docker-based guide, the Node.js version on the host doesn't matter as the application runs inside containers.

**Ok, now that you have set up the Droplet, you will now setup the app itself**
@@ -12,7 +12,7 @@ This guide covers the essential steps for securing your LibreChat deployment with
1. A cloud server (e.g., AWS, Google Cloud, Azure, Digital Ocean).
2. A registered domain name.
3. Terminal access to your cloud server.
-4. Node.js and NPM installed on your server.
+4. Node.js v20.19.0+ (or ^22.12.0 or >= 23.0.0) and NPM installed on your server.

## Initial Setup