mirror of
https://github.com/LibreChat-AI/librechat.ai.git
synced 2026-03-27 02:38:32 +07:00
🤖 refactor: Update Docs for Model Specs (#197)
* ✨ style: enhance toc and sidebar footer with gradient background and sticky positioning
* refactor: URL encoding guidelines for query parameters
* WIP: re-organize model specs information
* WIP: remove duplicate docs
* docs: add imageDetail parameter for image quality in URL query documentation
* docs: update model specs documentation with parameter descriptions and examples
* docs: add required field indication for list in model specs documentation
* docs: add append_current_datetime option to model specs and URL query documentation
@@ -30,10 +30,10 @@ modelSpecs:
  list:
    - name: "meeting-notes-gpt4"
      label: "Meeting Notes Assistant (GPT4)"
      default: true
      description: "Generate meeting notes by simply pasting in the transcript from a Teams recording."
      iconURL: "https://example.com/icon.png"
      preset:
        default: true
        endpoint: "azureOpenAI"
        model: "gpt-4-turbo-1106-preview"
        maxContextTokens: 128000 # Maximum context tokens
@@ -59,9 +59,12 @@ modelSpecs:
          Take a deep breath and be sure to think step by step.
## **Top-level Fields**

### enforce

**Key:**
<OptionTable
  options={[
    ['enforce', 'Boolean', 'Determines whether the model specifications should strictly override other configuration settings.', 'Setting this to `true` can lead to conflicts with interface options if not managed carefully.'],
  ]}
/>

**Example:**
@@ -76,9 +79,10 @@ modelSpecs:
```yaml
modelSpecs:
  enforce: true
```
### prioritize

**Key:**
<OptionTable
  options={[
    ['prioritize', 'Boolean', 'Specifies if model specifications should take priority over the default configuration when both are applicable.', 'When set to `true`, it ensures that a modelSpec is always selected in the UI. Doing this may prevent users from selecting different endpoints for the selected spec.'],
  ]}
/>

**Example:**
@@ -93,80 +97,169 @@ modelSpecs:
```yaml
modelSpecs:
  prioritize: false
```
### list

**Required**

**Key:**
<OptionTable
  options={[
    ['list', 'Array of Objects', 'Contains a list of individual model specifications detailing various configurations and behaviors.', 'Each object in the list details the configuration for a specific model, including its behaviors, appearance, and capabilities related to the application\'s functionality.'],
  ]}
/>
## **Model Spec (List Item)**

### **Overview**

Within each **Model Spec**, or each **list** item, you can configure the following fields:

- `name`
  - Unique identifier for the model.
- `label`
  - A user-friendly name or label for the model, shown in the header dropdown.
- `description`
  - A brief description of the model and its intended use or role, shown in the header dropdown menu.
- `iconURL`
  - URL or a predefined endpoint name for the model's icon.
- `default`
  - Specifies if this model spec is the default selection, to be auto-selected on every new chat.
- `showIconInMenu`
  - Controls whether the model's icon appears in the header dropdown menu.
- `showIconInHeader`
  - Controls whether the model's icon appears in the header dropdown button, left of its name.
- `preset`
  - Detailed preset configurations that define the behavior and capabilities of the model (see the Preset Fields section below for more details).
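
Pulling these fields together, a single list item might look like the following sketch (values are illustrative, reusing the example spec above):

```yaml
- name: "meeting-notes-gpt4"
  label: "Meeting Notes Assistant (GPT4)"
  default: true
  description: "Generate meeting notes by simply pasting in the transcript from a Teams recording."
  iconURL: "https://example.com/icon.png"
  showIconInMenu: true
  showIconInHeader: true
  preset:
    endpoint: "azureOpenAI"
    model: "gpt-4-turbo-1106-preview"
```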

---
### name

<OptionTable
  options={[
    ['name', 'String', 'Unique identifier for the model.', 'No default. Must be specified.'],
  ]}
/>

**Description:**
Unique identifier for the model.

---
### label

<OptionTable
  options={[
    ['label', 'String', 'A user-friendly name or label for the model, shown in the header dropdown.', 'No default. Optional.'],
  ]}
/>

**Description:**
A user-friendly name or label for the model, shown in the header dropdown.

---

### default

<OptionTable
  options={[
    ['default', 'Boolean', 'Specifies if this model spec is the default selection, to be auto-selected on every new chat.', ''],
  ]}
/>

**Description:**
Specifies if this model spec is the default selection, to be auto-selected on every new chat.

---

### iconURL

<OptionTable
  options={[
    ['iconURL', 'String', 'URL or a predefined endpoint name for the model\'s icon.', 'No default. Optional.'],
  ]}
/>

**Description:**
URL or a predefined endpoint name for the model's icon.

---

### description

<OptionTable
  options={[
    ['description', 'String', 'A brief description of the model and its intended use or role, shown in the header dropdown menu.', 'No default. Optional.'],
  ]}
/>

**Description:**
A brief description of the model and its intended use or role, shown in the header dropdown menu.

---

### showIconInMenu

<OptionTable
  options={[
    ['showIconInMenu', 'Boolean', 'Controls whether the model\'s icon appears in the header dropdown menu.', ''],
  ]}
/>

**Description:**
Controls whether the model's icon appears in the header dropdown menu.

---

### showIconInHeader

<OptionTable
  options={[
    ['showIconInHeader', 'Boolean', 'Controls whether the model\'s icon appears in the header dropdown button, left of its name.', ''],
  ]}
/>

**Description:**
Controls whether the model's icon appears in the header dropdown button, left of its name.

---

### preset

<OptionTable
  options={[
    ['preset', 'Object', 'Detailed preset configurations that define the behavior and capabilities of the model.', 'See "Preset Fields" below.'],
  ]}
/>

**Description:**
Detailed preset configurations that define the behavior and capabilities of the model (see Preset Fields below).

---
## Preset Fields

The `preset` field of a `modelSpecs.list` item is a comprehensive configuration blueprint for an AI model. It specifies the model's operational settings, tailoring its behavior, outputs, and interactions with other system components and endpoints.
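
Before the field-by-field breakdown, here is a sketch of a preset combining system options and model options (values are illustrative):

```yaml filename="modelSpecs / list / {spec_item} / preset"
preset:
  endpoint: "openAI"
  model: "gpt-4-turbo"
  modelLabel: "Customer Support Bot"
  greeting: "Paste your question below to get started."
  temperature: 0.7
  maxContextTokens: 4096
```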
### System Options

#### endpoint

**Required**

**Accepted Values:**
- `openAI`
- `azureOpenAI`
- `google`
- `anthropic`
- `assistants`
- `azureAssistants`
- `bedrock`
- `agents`

**Note:** If you are using a custom endpoint, the `endpoint` value must match the defined [custom endpoint name](/docs/configuration/librechat_yaml/object_structure/custom_endpoint#name) exactly.

**Key:**
<OptionTable
  options={[
    ['endpoint', 'Enum (EModelEndpoint) or String (nullable)', 'Specifies the endpoint the model communicates with to execute operations. This setting determines the external or internal service that the model interfaces with.', ''],
  ]}
/>

**Default:** `None`

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / endpoint"
preset:
  endpoint: "openAI"
```
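
For a custom endpoint, the value must match the configured custom endpoint name exactly. Assuming a custom endpoint named `Mistral` is defined (the name here is hypothetical), this would be:

```yaml filename="modelSpecs / list / {spec_item} / preset / endpoint"
preset:
  endpoint: "Mistral"
```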

---

#### modelLabel

**Key:**
<OptionTable
  options={[
    ['modelLabel', 'String (nullable)', 'The label used to identify the model in user interfaces or logs. It provides a human-readable name for the model, which is displayed in the UI, as well as made aware to the AI.', 'None'],
  ]}
/>

**Default:** `None`

**Example:**
@@ -178,94 +271,68 @@ preset:
```yaml filename="modelSpecs / list / {spec_item} / preset / modelLabel"
preset:
  modelLabel: "Customer Support Bot"
```

---

#### greeting

**Key:**
<OptionTable
  options={[
    ['greeting', 'String', 'A predefined message that is visible in the UI before a new chat is started. This is a good way to provide instructions to the user, or to make the interface seem more friendly and accessible.', ''],
  ]}
/>

**Default:** `None`

**Example:**

```yaml filename="modelSpecs / list / {spec_item} / preset / greeting"
preset:
  greeting: "This assistant creates meeting notes based on transcripts of Teams recordings. To start, simply paste the transcript into the chat box."
```

---

#### promptPrefix

**Key:**
<OptionTable
  options={[
    ['promptPrefix', 'String (nullable)', 'A static text prepended to every prompt sent to the model, setting a consistent context for responses.', 'When using "assistants" as the endpoint, this becomes the OpenAI field `additional_instructions`.'],
  ]}
/>

**Default:** `None`

**Example 1:**

```yaml filename="modelSpecs / list / {spec_item} / preset / promptPrefix"
preset:
  promptPrefix: "As a financial advisor, ..."
```

**Example 2:**

```yaml filename="modelSpecs / list / {spec_item} / preset / promptPrefix"
preset:
  promptPrefix: |
    Based on the transcript, create coherent meeting minutes for a business meeting. Include the following sections:
    - Date and Attendees
    - Agenda
    - Minutes
    - Action Items

    Focus on what items were discussed and/or resolved. List any open action items.
    The format should be a bulleted list of high level topics in chronological order, and then one or more concise sentences explaining the details.
    Each high level topic should have at least two sub topics listed, but add as many as necessary to support the high level topic.

    - Do not start items with the same opening words.

    Take a deep breath and be sure to think step by step.
```

---

#### resendFiles

**Key:**
<OptionTable
  options={[
    ['resendFiles', 'Boolean', 'Indicates whether files should be resent in scenarios where persistent sessions are not maintained.', ''],
  ]}
/>

**Example:**
@@ -277,12 +344,18 @@ preset:
```yaml filename="modelSpecs / list / {spec_item} / preset / resendFiles"
preset:
  resendFiles: true
```

---

#### imageDetail

**Accepted Values:**
- `low`
- `auto`
- `high`

**Key:**
<OptionTable
  options={[
    ['imageDetail', 'Enum (eImageDetailSchema)', 'Specifies the level of detail required in image analysis tasks, applicable to models with vision capabilities (OpenAI spec).', ''],
  ]}
/>

**Example:**
@@ -292,66 +365,367 @@ preset:
```yaml filename="modelSpecs / list / {spec_item} / preset / imageDetail"
preset:
  imageDetail: "high"
```

---

#### maxContextTokens

**Key:**
<OptionTable
  options={[
    ['maxContextTokens', 'Number', 'The maximum number of context tokens to provide to the model.', 'Useful if you want to limit the maximum context for this preset.'],
  ]}
/>

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / maxContextTokens"
preset:
  maxContextTokens: 4096
```

---

### Agent Options

Note that these options are only applicable when using the `agents` endpoint.

You should exclude any model options and defer to the agent's configuration as defined in the UI.

---

#### agent_id

**Key:**
<OptionTable
  options={[
    ['agent_id', 'String', 'Identification of an agent.', ''],
  ]}
/>

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / agent_id"
preset:
  agent_id: "agent_someUniqueId"
```

---
### Assistant Options

Note that these options are only applicable when using the `assistants` or `azureAssistants` endpoint.

Similar to [Agents](#agent-options), you should exclude any model options and defer to the assistant's configuration.

---

#### assistant_id

**Key:**
<OptionTable
  options={[
    ['assistant_id', 'String', 'Identification of an assistant.', ''],
  ]}
/>

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / assistant_id"
preset:
  assistant_id: "asst_someUniqueId"
```

---

#### instructions

**Note:** this is distinct from [`promptPrefix`](#promptprefix), as this overrides existing assistant instructions for current runs.

Only use this if you want to override the assistant's core instructions.

Use [`promptPrefix`](#promptprefix) for `additional_instructions`.

More information:

- https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-instructions
- https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-additional_instructions

<OptionTable
  options={[
    ['instructions', 'String', 'Overrides the assistant\'s default instructions.', ''],
  ]}
/>

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / instructions"
preset:
  assistant_id: "asst_98765"
  instructions: "Please handle customer queries regarding order status."
```

---

#### append_current_datetime

Adds the current date and time to `additional_instructions` for each run. Does not overwrite `promptPrefix`, but adds to it.

<OptionTable
  options={[
    ['append_current_datetime', 'Boolean', 'Adds the current date and time to `additional_instructions` as defined by `promptPrefix`', ''],
  ]}
/>

**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / append_current_datetime"
preset:
  append_current_datetime: true
```

---
### Model Options

> **Note:** Each parameter below includes a note on which endpoints support it.
> **OpenAI / AzureOpenAI / Custom** typically support `temperature`, `presence_penalty`, `frequency_penalty`, `stop`, `top_p`, `max_tokens`.
> **Google / Anthropic** typically support `topP`, `topK`, `maxOutputTokens`, `promptCache` (Anthropic only).
> **Bedrock** supports `region`, `maxTokens`, and a few others.

#### model

> **Supported by:** All endpoints (except `agents`)

<OptionTable
  options={[
    ['model', 'String (nullable)', 'The model name to use for the preset, matching a configured model under the chosen endpoint.', 'None'],
  ]}
/>

**Default:** `None`

**Example:**
```yaml
preset:
  model: "gpt-4-turbo"
```

---

#### temperature

> **Supported by:** `openAI`, `azureOpenAI`, `google` (as `temperature`), `anthropic` (as `temperature`), and custom (OpenAI-like)

<OptionTable
  options={[
    ['temperature', 'Number', 'Controls how deterministic or “creative” the model responses are.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  temperature: 0.7
```

---

#### presence_penalty

> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
> *Not typically used by Google/Anthropic/Bedrock*

<OptionTable
  options={[
    ['presence_penalty', 'Number', 'Penalty for repetitive tokens, encouraging exploration of new topics.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  presence_penalty: 0.3
```

---

#### frequency_penalty

> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
> *Not typically used by Google/Anthropic/Bedrock*

<OptionTable
  options={[
    ['frequency_penalty', 'Number', 'Penalty for repeated tokens, reducing redundancy in responses.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  frequency_penalty: 0.5
```

---

#### stop

> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
> *Not typically used by Google/Anthropic/Bedrock*

<OptionTable
  options={[
    ['stop', 'Array of Strings', 'Stop tokens for the model, instructing it to end its response if encountered.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  stop:
    - "END"
    - "STOP"
```

---

#### top_p

> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
> **Google/Anthropic** often use `topP` (capital “P”) instead of `top_p`.

<OptionTable
  options={[
    ['top_p', 'Number', 'Nucleus sampling parameter (0-1), controlling the randomness of tokens.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  top_p: 0.9
```

---

#### topP

> **Supported by:** `google` & `anthropic`
> (similar purpose to `top_p`, but named differently in those APIs)

<OptionTable
  options={[
    ['topP', 'Number', 'Nucleus sampling parameter for Google/Anthropic endpoints.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  topP: 0.8
```

---

#### topK

> **Supported by:** `google` & `anthropic`
> (k-sampling limit on the next token distribution)

<OptionTable
  options={[
    ['topK', 'Number', 'Limits the next token selection to the top K tokens.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  topK: 40
```

---

#### max_tokens

> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
> *For Google/Anthropic, use `maxOutputTokens` or `maxTokens` (depending on the endpoint).*

<OptionTable
  options={[
    ['max_tokens', 'Number', 'The maximum number of tokens in the model response.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  max_tokens: 4096
```

---

#### maxOutputTokens

> **Supported by:** `google`, `anthropic`
> *Equivalent to `max_tokens` for these providers.*

<OptionTable
  options={[
    ['maxOutputTokens', 'Number', 'The maximum number of tokens in the response (Google/Anthropic).', ''],
  ]}
/>

**Example:**
```yaml
preset:
  maxOutputTokens: 2048
```

---

#### promptCache

> **Supported by:** `anthropic`
> (Toggle Anthropic’s “prompt-caching” feature)

<OptionTable
  options={[
    ['promptCache', 'Boolean', 'Enables or disables Anthropic’s built-in prompt caching.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  promptCache: true
```

---

#### region

> **Supported by:** `bedrock`
> (Used to specify an AWS region for Amazon Bedrock)

<OptionTable
  options={[
    ['region', 'String', 'AWS region for Amazon Bedrock endpoints.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  region: "us-east-1"
```

---

#### maxTokens

> **Supported by:** `bedrock`
> (Used in place of `max_tokens`)

<OptionTable
  options={[
    ['maxTokens', 'Number', 'Maximum output tokens for Amazon Bedrock endpoints.', ''],
  ]}
/>

**Example:**
```yaml
preset:
  maxTokens: 1024
```
@@ -28,6 +28,35 @@ The most common parameters to use are `endpoint` and `model`. Using both is reco
```bash
https://your-domain.com/c/new?endpoint=azureOpenAI&model=o1-mini
```
### URL Encoding

Special characters in query params must be properly URL-encoded to work correctly. Common characters that need encoding:

- `:` → `%3A`
- `/` → `%2F`
- `?` → `%3F`
- `#` → `%23`
- `&` → `%26`
- `=` → `%3D`
- `+` → `%2B`
- Space → `%20` (or `+`)

Example with special characters:
```
Original: Write a function: def hello()
Encoded: /c/new?prompt=Write%20a%20function%3A%20def%20hello()
```

You can use JavaScript's built-in `encodeURIComponent()` function to properly encode prompts:
```javascript
const prompt = "Write a function: def hello()";
const encodedPrompt = encodeURIComponent(prompt);
const url = `/c/new?prompt=${encodedPrompt}`;
console.log(url);
```

Try running the code in your browser console to see the encoded URL (browser shortcut: `Ctrl+Shift+I`).
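
When building links with several parameters at once, the built-in `URLSearchParams` API handles the encoding for you; this sketch assumes the same example values as above (note it encodes spaces as `+`, which is also accepted):

```javascript
// Build a /c/new URL with several query parameters.
// URLSearchParams percent-encodes each value automatically
// (form-urlencoded style: spaces become "+", ":" becomes "%3A").
const params = new URLSearchParams({
  endpoint: "anthropic",
  model: "claude-3-5-sonnet-20241022",
  prompt: "Write a function: def hello()",
});
const url = `/c/new?${params.toString()}`;
console.log(url);
```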
### Endpoint Selection

The `endpoint` parameter can be used alone:
@@ -75,32 +104,6 @@ You can combine this with other parameters:
```bash
https://your-domain.com/c/new?endpoint=anthropic&model=claude-3-5-sonnet-20241022&prompt=Explain quantum computing
```
### Special Endpoints

#### Agents

@@ -125,6 +128,9 @@ LibreChat supports a wide range of parameters for fine-tuning your conversation
- `maxContextTokens`: Override the system-defined context window
- `resendFiles`: Control file resubmission in subsequent messages
- `promptPrefix`: Set custom instructions/system message
- `imageDetail`: 'low', 'auto', or 'high' for image quality
  - Note: while this is a LibreChat-specific parameter, it only affects OpenAI, OpenAI-like custom endpoints, and Azure OpenAI, for which it defaults to 'auto'
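
For instance, a link that pins the context window and sets a system message might look like this (values are illustrative; note the URL-encoded prompt prefix):

```bash
https://your-domain.com/c/new?endpoint=openAI&model=gpt-4-turbo&maxContextTokens=4096&promptPrefix=You%20are%20a%20concise%20assistant
```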

### Model Parameters
Different endpoints support various parameters:
@@ -152,16 +158,27 @@ More info: https://www.anthropic.com/news/prompt-caching

**Bedrock:**
```bash
# Bedrock region
region=us-west-2
# Bedrock equivalent of `max_tokens`
maxTokens=200
```

**Assistants/Azure Assistants:**
```bash
# Overrides existing assistant instructions for the current run
instructions=your+instructions
```
```bash
# Adds the current date and time to `additional_instructions` for each run
append_current_datetime=true
```
## More Info

For more information on any of the above, refer to [Model Spec Preset Fields](/docs/configuration/librechat_yaml/object_structure/model_specs), which shares most parameters.

**Example with multiple parameters:**
```bash
https://your-domain.com/c/new?endpoint=google&model=gemini-2.0-flash-exp&temperature=0.7&prompt=Oh hi mark
```
@@ -38,9 +38,35 @@ li > ul {
media-outlet {
  background-color: transparent !important;
}

.nextra-toc-footer, .nextra-sidebar-footer {
  background: linear-gradient(
    to bottom,
    rgba(255, 255, 255, 0),
    rgba(255, 255, 255, 0.3)
  ) !important;
  backdrop-filter: blur(4px);
  -webkit-backdrop-filter: blur(4px);
  border-top: 1px solid rgba(0, 0, 0, 0.05) !important;
  box-shadow: none !important;
  padding: 1rem !important;
  position: sticky !important;
  bottom: 0 !important;
}

.nextra-toc-footer {
  padding-bottom: 2rem !important;
}

/* For dark mode */
:is(html[class~='dark']) .nextra-toc-footer,
:is(html[class~='dark']) .nextra-sidebar-footer {
  background: linear-gradient(
    to bottom,
    rgba(0, 0, 0, 0),
    rgba(0, 0, 0, 0.75)
  ) !important;
  border-top: 1px solid rgba(255, 255, 255, 0.05) !important;
}

/* More gap between h2 in articles */