---
title: Query Parameters
icon: Link
description: Learn how to configure chat conversations using URL query parameters in LibreChat. Set models, endpoints, and conversation settings dynamically.
---
LibreChat supports dynamic configuration of chat conversations through URL query parameters. This feature allows you to initiate conversations with specific settings, models, and endpoints directly from the URL.
### Chat Paths
Query parameters must follow a valid chat path:
- For new conversations: `/c/new?`
- For existing conversations: `/c/[conversation-id]?` (where `conversation-id` is the ID of an existing conversation)
Examples:
```bash
https://your-domain.com/c/new?endpoint=ollama&model=llama3%3Alatest
https://your-domain.com/c/03debefd-6a50-438a-904d-1a806f82aad4?endpoint=openAI&model=o1-mini
```
## Basic Usage
The most common parameters to use are `endpoint` and `model`. Using both is recommended for the most predictable behavior:
```bash
https://your-domain.com/c/new?endpoint=azureOpenAI&model=o1-mini
```
### URL Encoding
Special characters in query parameter values must be URL-encoded to work correctly. Common characters that need encoding:
- `:` → `%3A`
- `/` → `%2F`
- `?` → `%3F`
- `#` → `%23`
- `&` → `%26`
- `=` → `%3D`
- `+` → `%2B`
- Space → `%20` (or `+`)
Example with special characters:
```text
Original: Write a function: def hello()
Encoded:  /c/new?prompt=Write%20a%20function%3A%20def%20hello()
```
You can use JavaScript's built-in `encodeURIComponent()` function to properly encode prompts:
```javascript
const prompt = "Write a function: def hello()";
const encodedPrompt = encodeURIComponent(prompt);
const url = `/c/new?prompt=${encodedPrompt}`;
console.log(url); // → /c/new?prompt=Write%20a%20function%3A%20def%20hello()
```
Try running the code in your browser console (`Ctrl+Shift+I` opens DevTools) to see the encoded URL.
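When building URLs with several parameters, `URLSearchParams` handles the encoding for every value at once. A minimal sketch (the `buildChatUrl` helper is illustrative, not part of LibreChat; note that `URLSearchParams` serializes spaces as `+`, which is also valid):

```javascript
// Build a LibreChat chat URL from a plain params object.
// URLSearchParams percent-encodes each value (spaces become "+").
function buildChatUrl(baseUrl, params) {
  const search = new URLSearchParams(params);
  return `${baseUrl}/c/new?${search.toString()}`;
}

const url = buildChatUrl("https://your-domain.com", {
  endpoint: "openAI",
  model: "gpt-4o",
  prompt: "Write a function: def hello()",
});
console.log(url);
// → https://your-domain.com/c/new?endpoint=openAI&model=gpt-4o&prompt=Write+a+function%3A+def+hello%28%29
```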
### Endpoint Selection
The `endpoint` parameter can be used alone:
```bash
https://your-domain.com/c/new?endpoint=google
```
When only `endpoint` is specified:
- It will use the last selected model from localStorage
- If no previous model exists, it will use the first available model in the endpoint's model list
#### Notes
- The `endpoint` value must be one of the following:
```bash
openAI, azureOpenAI, google, anthropic, assistants, azureAssistants, bedrock, agents
```
- If using a [custom endpoint](/docs/quick_start/custom_endpoints), you can use its name as the value (case-insensitive)
```bash
# using `endpoint=perplexity` for a custom endpoint named `Perplexity`
https://your-domain.com/c/new?endpoint=perplexity&model=llama-3.1-sonar-small-128k-online
```
### Model Selection
The `model` parameter can be used alone:
```bash
https://your-domain.com/c/new?model=gpt-4o
```
When only `model` is specified:
- It will only select the model if it's available in the current endpoint
- The current endpoint is either the default endpoint or the last selected endpoint
### Prompt Parameter
The `prompt` parameter allows you to pre-populate the chat input field:
```bash
https://your-domain.com/c/new?prompt=Explain%20quantum%20computing
```
You can also use `q` as a shorthand, which is interchangeable with `prompt`:
```bash
https://your-domain.com/c/new?q=Explain%20quantum%20computing
```
You can combine these with other parameters:
```bash
https://your-domain.com/c/new?endpoint=anthropic&model=claude-3-5-sonnet-20241022&prompt=Explain%20quantum%20computing
```
### Automatic Prompt Submission
The `submit` parameter allows you to automatically submit the prompt without manual intervention:
```bash
https://your-domain.com/c/new?prompt=Explain%20quantum%20computing&submit=true
```
This feature is particularly useful for:
- Creating automated workflows (e.g., Raycast, Alfred, Automator)
- Building external integrations
You can combine it with other parameters for complete automation:
```bash
https://your-domain.com/c/new?endpoint=openAI&model=gpt-4&prompt=Explain%20quantum%20computing&submit=true
```
### Special Endpoints
#### Model Specs
You can select a specific model spec by name:
```bash
https://your-domain.com/c/new?spec=meeting-notes-gpt4
```
This will load all the settings defined in the model spec. When using the `spec` parameter, other model parameters in the URL will be ignored.
#### Agents
You can directly load an agent using its ID without specifying the endpoint:
```bash
https://your-domain.com/c/new?agent_id=your-agent-id
```
This will automatically set the endpoint to `agents`.
#### Assistants
Similarly, you can load an assistant directly:
```bash
https://your-domain.com/c/new?assistant_id=your-assistant-id
```
This will automatically set the endpoint to `assistants`.
## Supported Parameters
LibreChat supports a wide range of parameters for fine-tuning your conversation settings:
### LibreChat Settings
- `maxContextTokens`: Override the system-defined context window
- `resendFiles`: Control file resubmission in subsequent messages
- `promptPrefix`: Set custom instructions/system message
- `imageDetail`: `low`, `auto`, or `high` for image quality
  - Note: while this is a LibreChat-specific parameter, it only affects OpenAI, Azure OpenAI, and OpenAI-like custom endpoints, where it defaults to `auto`
- `spec`: Select a specific LibreChat [Model Spec](/docs/configuration/librechat_yaml/object_structure/model_specs) by name
  - Must match the exact name of a configured model spec
  - When specified, other model parameters will not take effect; only those defined by the model spec apply
  - **Important:** If model specs are configured with `enforce: true`, using this parameter may be required for URL query params to work properly
- `fileTokenLimit`: Set a maximum token limit for file processing to control costs and resource usage
  - Note: the request value overrides the YAML default
### Model Parameters
Different endpoints support various parameters:
**OpenAI, Custom, Azure OpenAI:**
```bash
# Note: these should be valid values according to the provider's API
temperature, presence_penalty, frequency_penalty, stop, top_p, max_tokens,
reasoning_effort, reasoning_summary, verbosity, useResponsesApi, web_search, disableStreaming
```
**Google, Anthropic:**
```bash
# Note: these should be valid values according to the provider's API
topP, topK, maxOutputTokens, thinking, thinkingBudget, thinkingLevel, web_search
```
**Anthropic, Bedrock (Anthropic models):**
Set `promptCache` to `true` or `false` to toggle prompt caching:
```bash
promptCache
```
More info: [Anthropic prompt caching](https://www.anthropic.com/news/prompt-caching), [Bedrock prompt caching](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-get-started)
**Bedrock:**
```bash
# Bedrock region
region=us-west-2
# Bedrock equivalent of `max_tokens`
maxTokens=200
# Bedrock reasoning effort (for supported models like ZAI, MoonshotAI)
reasoning_effort=medium
```
**Assistants/Azure Assistants:**
```bash
# overrides existing assistant instructions for current run
instructions=your+instructions
```
```bash
# Adds the current date and time to `additional_instructions` for each run.
append_current_datetime=true
```
## More Info
For more information on any of the above, refer to [Model Spec Preset Fields](/docs/configuration/librechat_yaml/object_structure/model_specs), which shares most parameters.
**Example with multiple parameters:**
```bash
https://your-domain.com/c/new?endpoint=google&model=gemini-2.0-flash-exp&temperature=0.7&prompt=Oh%20hi%20mark
```
**Example with model spec:**
```bash
https://your-domain.com/c/new?spec=meeting-notes-gpt4&prompt=Here%20is%20the%20transcript...
```
Note: When using `spec`, other model parameters are ignored in favor of the model spec's configuration.
## ⚠️ Warning
Exercise caution when using query parameters:
- Misuse or exceeding provider limits may result in API errors
- If you encounter bad request errors, reset the conversation by clicking "New Chat"
- Some parameters may have no effect if they're not supported by the selected endpoint
## Best Practices
1. Always use both `endpoint` and `model` when possible
2. Verify parameter support for your chosen endpoint
3. Use reasonable values within provider limits
4. Test your parameter combinations before sharing URLs
## Parameter Validation
All parameters are validated against LibreChat's schema before being applied. Invalid parameters or values will be ignored, and valid settings will be applied to the conversation.
---
This feature enables powerful use cases like:
- Sharing specific conversation configurations
- Creating bookmarks for different chat settings
- Automating chat setup through URL parameters