diff --git a/pages/docs/configuration/librechat_yaml/object_structure/model_specs.mdx b/pages/docs/configuration/librechat_yaml/object_structure/model_specs.mdx index 70c35ec..d796b4f 100644 --- a/pages/docs/configuration/librechat_yaml/object_structure/model_specs.mdx +++ b/pages/docs/configuration/librechat_yaml/object_structure/model_specs.mdx @@ -30,10 +30,10 @@ modelSpecs: list: - name: "meeting-notes-gpt4" label: "Meeting Notes Assistant (GPT4)" + default: true description: "Generate meeting notes by simply pasting in the transcript from a Teams recording." iconURL: "https://example.com/icon.png" preset: - default: true endpoint: "azureOpenAI" model: "gpt-4-turbo-1106-preview" maxContextTokens: 128000 # Maximum context tokens @@ -59,9 +59,12 @@ modelSpecs: Take a deep breath and be sure to think step by step. ``` -## enforce +--- + +## **Top-level Fields** + +### enforce -**Key:** -Each spec object in the `list` can have the following settings: +## **Model Spec (List Item)** -### **Overview** +Within each **Model Spec**, or each **list** item, you can configure the following fields: - - `name` - - Unique identifier for the model. - - `label` - - A user-friendly name or label for the model, shown in the header dropdown. - - `description` - - A brief description of the model and its intended use or role, shown in the header dropdown menu. - - `iconURL` - - URL or a predefined endpoint name for the model's icon. - - `default` - - Specifies if this model spec is the default selection, to be auto-selected on every new chat. - - `showIconInMenu` - - Controls whether the model's icon appears in the header dropdown menu. - - `showIconInHeader` - - Controls whether the model's icon appears in the header dropdown button, left of its name. - - `preset` - - Detailed preset configurations that define the behavior and capabilities of the model (see preset object structure section below for more details). 
+--- -## Preset Object Structure +### name -The preset field for a modelSpec list item is made up of a comprehensive configuration blueprint for AI models within the system. It is designed to specify the operational settings of AI models, tailoring their behavior, outputs, and interactions with other system components and endpoints. + -### endpoint +**Description:** +Unique identifier for the model. + +--- + +### label + + + +**Description:** +A user-friendly name or label for the model, shown in the header dropdown. + +--- + +### default + + + +**Description:** +Specifies if this model spec is the default selection, to be auto-selected on every new chat. + +--- + +### iconURL + + + +**Description:** +URL or a predefined endpoint name for the model's icon. + +--- + +### description + + + +**Description:** +A brief description of the model and its intended use or role, shown in the header dropdown menu. + +--- + +### showIconInMenu + + + +**Description:** +Controls whether the model's icon appears in the header dropdown menu. + +--- + +### showIconInHeader + + + +**Description:** +Controls whether the model's icon appears in the header dropdown button, left of its name. + +--- + +### preset + + + +**Description:** +Detailed preset configurations that define the behavior and capabilities of the model (see Preset Object Structure below). + +--- + +## Preset Fields + +The `preset` field for a `modelSpecs.list` item is made up of a comprehensive configuration blueprint for AI models within the system. It is designed to specify the operational settings of AI models, tailoring their behavior, outputs, and interactions with other system components and endpoints. 
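To make the preset structure concrete, here is a hedged sketch of a single spec entry; every value below is illustrative rather than a recommendation:

```yaml
modelSpecs:
  list:
    - name: "support-bot"             # unique identifier (illustrative)
      label: "Customer Support Bot"   # shown in the header dropdown
      default: true                   # auto-selected on every new chat
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo"          # placeholder model name
        promptPrefix: "As a customer support agent, ..."
        maxContextTokens: 4096
```

The individual preset fields are documented one by one in the sections that follow.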
+ +### System Options + +#### endpoint + +**Required** + +**Accepted Values:** +- `openAI` +- `azureOpenAI` +- `google` +- `anthropic` +- `assistants` +- `azureAssistants` +- `bedrock` +- `agents` + +**Note:** If you are using a custom endpoint, the `endpoint` value must match the defined [custom endpoint name](/docs/configuration/librechat_yaml/object_structure/custom_endpoint#name) exactly. -**Key:** -**Default:** `None` - **Example:** ```yaml filename="modelSpecs / list / {spec_item} / preset / endpoint" preset: endpoint: "openAI" ``` -### model +--- + +#### modelLabel -**Key:** - -**Default:** `None` - -**Example:** -```yaml filename="modelSpecs / list / {spec_item} / preset / model" -preset: - model: "gpt-4-turbo" -``` - -### modelLabel - -**Key:** - @@ -178,94 +271,68 @@ preset: modelLabel: "Customer Support Bot" ``` -### greeting +--- + +#### greeting -**Key:** **Default:** `None` **Example:** - ```yaml filename="modelSpecs / list / {spec_item} / preset / greeting" preset: greeting: "This assistant creates meeting notes based on transcripts of Teams recordings. To start, simply paste the transcript into the chat box." ``` -### promptPrefix +--- + +#### promptPrefix -**Key:** **Default:** `None` **Example 1:** - ```yaml filename="modelSpecs / list / {spec_item} / preset / promptPrefix" preset: promptPrefix: "As a financial advisor, ..." ``` **Example 2:** - ```yaml filename="modelSpecs / list / {spec_item} / preset / promptPrefix" preset: promptPrefix: | - Based on the transcript, create coherent meeting minutes for a business meeting. Include the following sections: - - Date and Attendees - - Agenda - - Minutes - - Action Items + Based on the transcript, create coherent meeting minutes for a business meeting. Include the following sections: + - Date and Attendees + - Agenda + - Minutes + - Action Items - Focus on what items were discussed and/or resolved. List any open action items. 
- The format should be a bulleted list of high level topics in chronological order, and then one or more concise sentences explaining the details. - Each high level topic should have at least two sub topics listed, but add as many as necessary to support the high level topic. + Focus on what items were discussed and/or resolved. List any open action items. + The format should be a bulleted list of high level topics in chronological order, and then one or more concise sentences explaining the details. + Each high level topic should have at least two sub topics listed, but add as many as necessary to support the high level topic. - - Do not start items with the same opening words. + - Do not start items with the same opening words. - Take a deep breath and be sure to think step by step. + Take a deep breath and be sure to think step by step. ``` -### model_options +--- -> These settings control the stochastic nature and behavior of model responses, affecting creativity, relevance, and variability. Additionally it is possible to specify the number of tokens for the context and output windows. 
+#### resendFiles -**Key:** - -**Example:** -```yaml filename="modelSpecs / list / {spec_item} / preset / model_options" -preset: - temperature: 0.7 - top_p: 0.9 - maxContextTokens: 4096 -``` - -### resendFiles - -**Key:** - @@ -277,12 +344,18 @@ preset: resendFiles: true ``` -### imageDetail +--- + +#### imageDetail + +**Accepted Values:** +- low +- auto +- high -**Key:** @@ -292,66 +365,367 @@ preset: imageDetail: "high" ``` -### agentOptions +--- + +#### maxContextTokens -**Key:** **Example:** -```yaml filename="modelSpecs / list / {spec_item} / preset / agentOptions" +```yaml filename="modelSpecs / list / {spec_item} / preset / maxContextTokens" preset: - agentOptions: - agent: "functions" - skipCompletion: false - model: "gpt-4-turbo" - temperature: 0.5 + maxContextTokens: 4096 ``` -### tools +--- + +### Agent Options + +Note that these options are only applicable when using the `agents` endpoint. + +You should exclude any model options and defer to the agent's configuration as defined in the UI. + +--- + +#### agent_id -**Key:** - **Example:** -```yaml filename="modelSpecs / list / {spec_item} / preset / tools" +```yaml filename="modelSpecs / list / {spec_item} / preset / agent_id" preset: - tools: ["dalle", "tavily_search_results_json", "azure-ai-search", "traversaal_search"] + agent_id: "agent_someUniqueId" ``` -**Notes:** -- At the moment, only tools that have credentials provided for them via .env file can be used with modelSpecs, unless the user already had the tool installed. 
-- You can find the names of the tools to filter in [api/app/clients/tools/manifest.json](https://github.com/danny-avila/LibreChat/blob/main/api/app/clients/tools/manifest.json)
-  - Use the `pluginKey` value
-- Also, any listed under the ".well-known" directory [api/app/clients/tools/.well-known](https://github.com/danny-avila/LibreChat/blob/main/api/app/clients/tools/.well-known)
-  - Use the `name_for_model` value
+---
 
-## assistant_options
+### Assistant Options
+
+Note that these options are only applicable when using the `assistants` or `azureAssistants` endpoints.
+
+Similar to [Agents](#agent-options), you should exclude any model options and defer to the assistant's configuration.
+
+---
+
+#### assistant_id
 
-**Key:**
 
 **Example:**
-```yaml filename="modelSpecs / list / {spec_item} / preset / assistant_options"
+```yaml filename="modelSpecs / list / {spec_item} / preset / assistant_id"
+preset:
+  assistant_id: "asst_someUniqueId"
+```
+
+---
+
+#### instructions
+
+**Note:** This is distinct from [`promptPrefix`](#promptprefix), as this overrides existing assistant instructions for the current run.
+
+Only use this if you want to override the assistant's core instructions.
+
+Use [`promptPrefix`](#promptprefix) for `additional_instructions`.
+
+More information:
+
+- https://platform.openai.com/docs/api-reference/models#runs-createrun-instructions
+- https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-additional_instructions
+
+
+
+**Example:**
+```yaml filename="modelSpecs / list / {spec_item} / preset / instructions"
 preset:
-  assistant_id: "asst_98765"
   instructions: "Please handle customer queries regarding order status."
 ```
+
+---
+
+#### append_current_datetime
+
+Adds the current date and time to `additional_instructions` for each run. Does not overwrite `promptPrefix`, but adds to it.
+ + + +**Example:** +```yaml filename="modelSpecs / list / {spec_item} / preset / append_current_datetime" +preset: + append_current_datetime: true +``` + +--- + +### Model Options + +> **Note:** Each parameter below includes a note on which endpoints support it. +> **OpenAI / AzureOpenAI / Custom** typically support `temperature`, `presence_penalty`, `frequency_penalty`, `stop`, `top_p`, `max_tokens`. +> **Google / Anthropic** typically support `topP`, `topK`, `maxOutputTokens`, `promptCache` (Anthropic only). +> **Bedrock** supports `region`, `maxTokens`, and a few others. + +#### model + +> **Supported by:** All endpoints (except `agents`) + + + +**Default:** `None` + +**Example:** +```yaml +preset: + model: "gpt-4-turbo" +``` + +--- + +#### temperature + +> **Supported by:** `openAI`, `azureOpenAI`, `google` (as `temperature`), `anthropic` (as `temperature`), and custom (OpenAI-like) + + + +**Example:** +```yaml +preset: + temperature: 0.7 +``` + +--- + +#### presence_penalty + +> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like) +> *Not typically used by Google/Anthropic/Bedrock* + + + +**Example:** +```yaml +preset: + presence_penalty: 0.3 +``` + +--- + +#### frequency_penalty + +> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like) +> *Not typically used by Google/Anthropic/Bedrock* + + + +**Example:** +```yaml +preset: + frequency_penalty: 0.5 +``` + +--- + +#### stop + +> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like) +> *Not typically used by Google/Anthropic/Bedrock* + + + +**Example:** +```yaml +preset: + stop: + - "END" + - "STOP" +``` + +--- + +#### top_p + +> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like) +> **Google/Anthropic** often use `topP` (capital “P”) instead of `top_p`. 
+ + + +**Example:** +```yaml +preset: + top_p: 0.9 +``` + +--- + +#### topP + +> **Supported by:** `google` & `anthropic` +> (similar purpose to `top_p`, but named differently in those APIs) + + + +**Example:** +```yaml +preset: + topP: 0.8 +``` + +--- + +#### topK + +> **Supported by:** `google` & `anthropic` +> (k-sampling limit on the next token distribution) + + + +**Example:** +```yaml +preset: + topK: 40 +``` + +--- + +#### max_tokens + +> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like) +> *For Google/Anthropic, use `maxOutputTokens` or `maxTokens` (depending on the endpoint).* + + + +**Example:** +```yaml +preset: + max_tokens: 4096 +``` + +--- + +#### maxOutputTokens + +> **Supported by:** `google`, `anthropic` +> *Equivalent to `max_tokens` for these providers.* + + + +**Example:** +```yaml +preset: + maxOutputTokens: 2048 +``` + +--- + +#### promptCache + +> **Supported by:** `anthropic` +> (Toggle Anthropic’s “prompt-caching” feature) + + + +**Example:** +```yaml +preset: + promptCache: true +``` + +--- + +#### region + +> **Supported by:** `bedrock` +> (Used to specify an AWS region for Amazon Bedrock) + + + +**Example:** +```yaml +preset: + region: "us-east-1" +``` + +--- + +#### maxTokens + +> **Supported by:** `bedrock` +> (Used in place of `max_tokens`) + + + +**Example:** +```yaml +preset: + maxTokens: 1024 +``` \ No newline at end of file diff --git a/pages/docs/features/url_query.mdx b/pages/docs/features/url_query.mdx index 9a43dc8..02172df 100644 --- a/pages/docs/features/url_query.mdx +++ b/pages/docs/features/url_query.mdx @@ -28,6 +28,35 @@ The most common parameters to use are `endpoint` and `model`. Using both is reco https://your-domain.com/c/new?endpoint=azureOpenAI&model=o1-mini ``` +### URL Encoding + +Special characters in query params must be properly URL-encoded to work correctly. 
Common characters that need encoding:
+
+- `:` → `%3A`
+- `/` → `%2F`
+- `?` → `%3F`
+- `#` → `%23`
+- `&` → `%26`
+- `=` → `%3D`
+- `+` → `%2B`
+- Space → `%20` (or `+`)
+
+Example with special characters:
+```
+Original: Write a function: def hello()
+Encoded: /c/new?prompt=Write%20a%20function%3A%20def%20hello()
+```
+
+You can use JavaScript's built-in `encodeURIComponent()` function to properly encode prompts:
+```javascript
+const prompt = "Write a function: def hello()";
+const encodedPrompt = encodeURIComponent(prompt);
+const url = `/c/new?prompt=${encodedPrompt}`;
+console.log(url);
+```
+
+Try running the code in your browser console (open it with `Ctrl+Shift+I`) to see the encoded URL.
+
 ### Endpoint Selection
 
 The `endpoint` parameter can be used alone:
@@ -75,32 +104,6 @@ You can combine this with other parameters:
 https://your-domain.com/c/new?endpoint=anthropic&model=claude-3-5-sonnet-20241022&prompt=Explain quantum computing
 ```
 
-#### URL Encoding
-
-Special characters in the prompt must be properly URL-encoded to work correctly.
Common characters that need encoding:
-
-- `:` → `%3A`
-- `/` → `%2F`
-- `?` → `%3F`
-- `#` → `%23`
-- `&` → `%26`
-- `=` → `%3D`
-- `+` → `%2B`
-- Space → `%20` (or `+`)
-
-Example with special characters:
-```
-Original: Write a function: def hello()
-Encoded URL: /c/new?prompt=Write%20a%20function%3A%20def%20hello()
-```
-
-You can use JavaScript's built-in `encodeURIComponent()` function to properly encode prompts:
-```javascript
-const prompt = "Write a function: def hello()";
-const encodedPrompt = encodeURIComponent(prompt);
-const url = `/c/new?prompt=${encodedPrompt}`;
-```
-
 ### Special Endpoints
 
 #### Agents
@@ -125,6 +128,9 @@ LibreChat supports a wide range of parameters for fine-tuning your conversation
 - `maxContextTokens`: Override the system-defined context window
 - `resendFiles`: Control file resubmission in subsequent messages
 - `promptPrefix`: Set custom instructions/system message
+- `imageDetail`: Set image quality to 'low', 'auto', or 'high'
+  - Note: although this is a LibreChat-specific parameter, it only applies to
+    the OpenAI, Azure OpenAI, and OpenAI-like custom endpoints, where it defaults to 'auto'
 
 ### Model Parameters
 
 Different endpoints support various parameters:
@@ -152,16 +158,27 @@ More info: https://www.anthropic.com/news/prompt-caching
 
 **Bedrock:**
 ```bash
-region, maxTokens
+# Bedrock region
+region=us-west-2
+# Bedrock equivalent of `max_tokens`
+maxTokens=200
 ```
 
 **Assistants/Azure Assistants:**
 ```bash
 # overrides existing assistant instructions for current run
-instructions
+instructions=your+instructions
 ```
+```bash
+# Adds the current date and time to `additional_instructions` for each run.
+append_current_datetime=true
+```
 
-Example with multiple parameters:
+## More Info
+
+For more information on any of the above, refer to [Model Spec Preset Fields](/docs/configuration/librechat_yaml/object_structure/model_specs), which documents most of the same parameters.
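The parameters above can also be assembled programmatically rather than by hand-encoding each value. A sketch (not from the LibreChat codebase) using the standard `URLSearchParams` API; the domain and parameter values are illustrative:

```javascript
// Build a "new chat" URL from a map of query parameters.
// URLSearchParams percent-encodes each value (spaces become "+").
function buildChatUrl(baseUrl, params) {
  const query = new URLSearchParams(params).toString();
  return `${baseUrl}/c/new?${query}`;
}

const url = buildChatUrl("https://your-domain.com", {
  endpoint: "google",
  model: "gemini-2.0-flash-exp",
  temperature: "0.7",
  prompt: "Explain quantum computing",
});
console.log(url);
// → https://your-domain.com/c/new?endpoint=google&model=gemini-2.0-flash-exp&temperature=0.7&prompt=Explain+quantum+computing
```

Note that `URLSearchParams` uses form encoding (space → `+`), while `encodeURIComponent` produces `%20`; both are accepted in query strings.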
+
+**Example with multiple parameters:**
 ```bash
 https://your-domain.com/c/new?endpoint=google&model=gemini-2.0-flash-exp&temperature=0.7&prompt=Oh hi mark
 ```
diff --git a/src/overrides.css b/src/overrides.css
index f04ccfa..de2fb12 100644
--- a/src/overrides.css
+++ b/src/overrides.css
@@ -38,9 +38,35 @@ li > ul {
 media-outlet {
   background-color: transparent !important;
 }
+
 .nextra-toc-footer,
 .nextra-sidebar-footer {
-  background-color: transparent !important;
-  box-shadow: none;
+  background: linear-gradient(
+    to bottom,
+    rgba(255, 255, 255, 0),
+    rgba(255, 255, 255, 0.3)
+  ) !important;
+  backdrop-filter: blur(4px);
+  -webkit-backdrop-filter: blur(4px);
+  border-top: 1px solid rgba(0, 0, 0, 0.05) !important;
+  box-shadow: none !important;
+  padding: 1rem !important;
+  position: sticky !important;
+  bottom: 0 !important;
+}
+
+.nextra-toc-footer {
+  padding-bottom: 2rem !important;
+}
+
+/* For dark mode */
+:is(html[class~='dark']) .nextra-toc-footer,
+:is(html[class~='dark']) .nextra-sidebar-footer {
+  background: linear-gradient(
+    to bottom,
+    rgba(0, 0, 0, 0),
+    rgba(0, 0, 0, 0.75)
+  ) !important;
+  border-top: 1px solid rgba(255, 255, 255, 0.05) !important;
 }
 
 /* More gap between h2 in articles */