📚 docs: Update Preset Fields and Query Params (#361)

* 📘 docs: Enhance Model Specs documentation with new parameters and default values

- Updated Model Specs documentation to include new parameters: `reasoning_effort`, `reasoning_summary`, `useResponsesApi`, `web_search`, `disableStreaming`, `thinking`, and `thinkingBudget`.
- Added default values for existing parameters
- Documented changes in changelog

* docs: change note on model params from 'numbers' to 'values' to account for new params that don't have numeric values
Author: Dustin Healy
Date: 2025-07-21 14:41:22 -07:00
Committed by: GitHub
Parent: bc1ba654d0
Commit: dfbd8cbd18
3 changed files with 165 additions and 5 deletions


@@ -23,3 +23,4 @@
- Added user placeholder variables support to Custom Endpoint Headers:
- Users can now use `{{LIBRECHAT_USER_ID}}`, `{{LIBRECHAT_USER_EMAIL}}`, and other user field placeholders in custom endpoint headers
- See: [Custom Endpoint Object Structure - Headers](/docs/configuration/librechat_yaml/object_structure/custom_endpoint#headers) for details
- Improved [Model Specs documentation](/docs/configuration/librechat_yaml/object_structure/model_specs) with parameter support updates (`disableStreaming`, `thinking`, `thinkingBudget`, `web_search`, etc.)
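
For reference, the user placeholder headers described above can be configured like this (the endpoint name, base URL, and header keys are illustrative, not part of the changelog):

```yaml filename="librechat.yaml"
endpoints:
  custom:
    - name: "ExampleProvider"               # illustrative endpoint name
      apiKey: "${EXAMPLE_API_KEY}"
      baseURL: "https://api.example.com/v1" # illustrative URL
      models:
        default: ["example-model"]
      headers:
        # Placeholders are substituted per-user at request time
        X-User-Id: "{{LIBRECHAT_USER_ID}}"
        X-User-Email: "{{LIBRECHAT_USER_EMAIL}}"
```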


@@ -360,7 +360,7 @@ preset:
]}
/>
**Default:** `true`
**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / resendFiles"
preset:
  resendFiles: true
```
@@ -383,6 +383,8 @@ preset:
]}
/>
**Default:** `"auto"`
**Example:**
```yaml filename="modelSpecs / list / {spec_item} / preset / imageDetail"
preset:
  imageDetail: "auto"
```
@@ -710,6 +712,8 @@ preset:
]}
/>
**Default:** `true`
**Example:**
```yaml
preset:
```
@@ -718,6 +722,160 @@ preset:
---
#### reasoning_effort
**Accepted Values:**
- `none`
- `low`
- `medium`
- `high`
> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
<OptionTable
options={[
['reasoning_effort', 'String', 'Controls the reasoning effort level for the model.', ''],
]}
/>
**Default:** `"none"`
**Example:**
```yaml
preset:
reasoning_effort: "low"
```
---
#### reasoning_summary
**Accepted Values:**
- `none`
- `auto`
- `concise`
- `detailed`
> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
<OptionTable
options={[
['reasoning_summary', 'String', 'Sets reasoning summary preferences for the model.', ''],
]}
/>
**Default:** `"none"`
**Example:**
```yaml
preset:
reasoning_summary: "detailed"
```
---
#### useResponsesApi
> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
<OptionTable
options={[
['useResponsesApi', 'Boolean', 'Enables or disables the responses API for the model.', ''],
]}
/>
**Default:** `false`
**Example:**
```yaml
preset:
useResponsesApi: true
```
---
#### web_search
> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like), `google`, `anthropic`
<OptionTable
options={[
['web_search', 'Boolean', 'Enables or disables web search functionality for the model.', ''],
]}
/>
**Default:** `false`
**Note:** For Google endpoints, this parameter appears as **Grounding with Google Search** in the parameters panel, but it maps to `web_search` in the implementation.
**Example:**
```yaml
preset:
web_search: true
```
---
#### disableStreaming
> **Supported by:** `openAI`, `azureOpenAI`, custom (OpenAI-like)
<OptionTable
options={[
['disableStreaming', 'Boolean', 'Disables streaming responses from the model.', ''],
]}
/>
**Default:** `false`
**Example:**
```yaml
preset:
disableStreaming: true
```
---
#### thinkingBudget
> **Supported by:** `google`, `anthropic`, `bedrock` (Anthropic models)
<OptionTable
options={[
['thinkingBudget', 'Number or String', 'Controls the number of thinking tokens the model can use for internal reasoning. Larger budgets can improve response quality for complex problems.', ''],
]}
/>
**Default:** `"Auto (-1)"` for Google; `2000` for Anthropic and for Bedrock (Anthropic models)
**Example:**
```yaml
preset:
thinkingBudget: "2000"
```
---
#### thinking
> **Supported by:** `google`, `anthropic`, `bedrock` (Anthropic models)
<OptionTable
options={[
['thinking', 'Boolean', 'Indicates whether the model should spend time thinking before generating a response.', ''],
]}
/>
**Default:** `true`
**Example:**
```yaml
preset:
thinking: true
```
---
#### region
> **Supported by:** `bedrock`
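
Putting the new fields together, a spec that uses several of the parameters documented above might look like this (the spec name, label, and model are illustrative):

```yaml filename="librechat.yaml"
modelSpecs:
  list:
    - name: "reasoning-preset"    # illustrative spec name
      label: "Reasoning Model"
      preset:
        endpoint: "openAI"
        model: "o1"               # illustrative model name
        reasoning_effort: "medium"
        reasoning_summary: "auto"
        useResponsesApi: true
        web_search: true
        disableStreaming: false
```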


@@ -160,14 +160,15 @@ Different endpoints support various parameters:
**OpenAI, Custom, Azure OpenAI:**
```bash
# Note: these should be valid values according to the provider's API
temperature, presence_penalty, frequency_penalty, stop, top_p, max_tokens,
reasoning_effort, reasoning_summary, useResponsesApi, web_search, disableStreaming
```
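
As a sketch, an OpenAI-style preset combining the numeric and the new non-numeric parameters could look like this (the model name is illustrative):

```yaml filename="librechat.yaml"
preset:
  endpoint: "openAI"
  model: "o3-mini"          # illustrative model name
  temperature: 0.7
  top_p: 0.9
  max_tokens: 4096
  reasoning_effort: "low"
  web_search: true
```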
**Google, Anthropic:**
```bash
# Note: these should be valid values according to the provider's API
topP, topK, maxOutputTokens, thinking, thinkingBudget, web_search
```
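
A corresponding Anthropic-style sketch using the thinking parameters (again, the model name is illustrative):

```yaml filename="librechat.yaml"
preset:
  endpoint: "anthropic"
  model: "claude-3-7-sonnet"  # illustrative model name
  topP: 0.9
  maxOutputTokens: 8192
  thinking: true
  thinkingBudget: 4000
```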
**Anthropic Specific:**