diff --git a/pages/docs/configuration/librechat_yaml/ai_endpoints/index.mdx b/pages/docs/configuration/librechat_yaml/ai_endpoints/index.mdx index 9dc06f0..414179e 100644 --- a/pages/docs/configuration/librechat_yaml/ai_endpoints/index.mdx +++ b/pages/docs/configuration/librechat_yaml/ai_endpoints/index.mdx @@ -3,11 +3,11 @@ title: AI Endpoints description: This section lists known, compatible AI Endpoints with example setups for the `librechat.yaml` AKA the LibreChat Custom Config file. --- -# Compatible AI Endpoints +# Custom AI Endpoints ## Intro -This section lists known, compatible AI Endpoints with example setups for the `librechat.yaml` file, also known as the [Custom Config](/docs/configuration/librechat_yaml) file. +This section lists known, compatible AI Endpoints, also known as "Custom Endpoints," with example setups for the `librechat.yaml` file, also known as the [Custom Config](/docs/configuration/librechat_yaml) file. In all of the examples, arbitrary environment variable names are defined but you can use any name you wish, as well as changing the value to `user_provided` to allow users to submit their own API key from the web UI. diff --git a/pages/docs/features/_meta.ts b/pages/docs/features/_meta.ts index 42206a1..9c164eb 100644 --- a/pages/docs/features/_meta.ts +++ b/pages/docs/features/_meta.ts @@ -2,6 +2,8 @@ export default { index: 'Overview', agents: 'Agents', code_interpreter: 'Code Interpreter API', + artifacts: 'Artifacts - Generative UI', // local_setup: 'Local Setup', // custom_endpoints: 'Custom Endpoints', + url_query: 'URL Query Parameters', } diff --git a/pages/docs/features/url_query.mdx b/pages/docs/features/url_query.mdx new file mode 100644 index 0000000..9a43dc8 --- /dev/null +++ b/pages/docs/features/url_query.mdx @@ -0,0 +1,196 @@ +--- +title: Query Parameters - Dynamic Chat Configuration via URL +description: Learn how to configure chat conversations using URL query parameters in LibreChat. 
Set models, endpoints, and conversation settings dynamically.
+---
+
+# URL Query Parameters
+
+LibreChat supports dynamic configuration of chat conversations through URL query parameters. This feature allows you to initiate conversations with specific settings, models, and endpoints directly from the URL.
+
+## Chat Paths
+
+Query parameters must follow a valid chat path:
+- For new conversations: `/c/new?`
+- For existing conversations: `/c/[conversation-id]?` (where `conversation-id` is the ID of an existing conversation)
+
+Examples:
+
+```bash
+https://your-domain.com/c/new?endpoint=ollama&model=llama3%3Alatest
+https://your-domain.com/c/03debefd-6a50-438a-904d-1a806f82aad4?endpoint=openAI&model=o1-mini
+```
+
+## Basic Usage
+
+The most common parameters to use are `endpoint` and `model`. Using both is recommended for the most predictable behavior:
+
+```bash
+https://your-domain.com/c/new?endpoint=azureOpenAI&model=o1-mini
+```
+
+### Endpoint Selection
+
+The `endpoint` parameter can be used alone:
+```bash
+https://your-domain.com/c/new?endpoint=google
+```
+
+When only `endpoint` is specified:
+- It will use the last selected model from localStorage
+- If no previous model exists, it will use the first available model in the endpoint's model list
+
+#### Notes
+
+- The `endpoint` value must be one of the following:
+```bash
+openAI, azureOpenAI, google, anthropic, assistants, azureAssistants, bedrock, agents
+```
+- If using a [custom endpoint](/docs/quick_start/custom_endpoints), you can use its name as the value (case-insensitive)
+
+```bash
+# using `endpoint=perplexity` for a custom endpoint named `Perplexity`
+https://your-domain.com/c/new?endpoint=perplexity&model=llama-3.1-sonar-small-128k-online
+```
+
+### Model Selection
+
+The `model` parameter can be used alone:
+```bash
+https://your-domain.com/c/new?model=gpt-4o
+```
+
+When only `model` is specified:
+- It will only select the model if it's available in the current endpoint
+- The current endpoint is either
the default endpoint or the last selected endpoint
+
+### Prompt Parameter
+
+The `prompt` parameter allows you to pre-populate the chat input field:
+```bash
+https://your-domain.com/c/new?prompt=Explain%20quantum%20computing
+```
+
+You can combine this with other parameters:
+```bash
+https://your-domain.com/c/new?endpoint=anthropic&model=claude-3-5-sonnet-20241022&prompt=Explain%20quantum%20computing
+```
+
+#### URL Encoding
+
+Special characters in the prompt must be properly URL-encoded to work correctly. Common characters that need encoding:
+
+- `:` → `%3A`
+- `/` → `%2F`
+- `?` → `%3F`
+- `#` → `%23`
+- `&` → `%26`
+- `=` → `%3D`
+- `+` → `%2B`
+- Space → `%20` (or `+`)
+
+Example with special characters:
+```
+Original: Write a function: def hello()
+Encoded URL: /c/new?prompt=Write%20a%20function%3A%20def%20hello()
+```
+
+You can use JavaScript's built-in `encodeURIComponent()` function to properly encode prompts:
+```javascript
+const prompt = "Write a function: def hello()";
+const encodedPrompt = encodeURIComponent(prompt);
+const url = `/c/new?prompt=${encodedPrompt}`;
+```
+
+### Special Endpoints
+
+#### Agents
+You can directly load an agent using its ID without specifying the endpoint:
+```bash
+https://your-domain.com/c/new?agent_id=your-agent-id
+```
+This will automatically set the endpoint to `agents`.
+
+#### Assistants
+Similarly, you can load an assistant directly:
+```bash
+https://your-domain.com/c/new?assistant_id=your-assistant-id
+```
+This will automatically set the endpoint to `assistants`.
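
When building links with several parameters, the encoding steps above can also be sketched with the standard `URLSearchParams` API, which percent-encodes every value automatically. The endpoint, model, and prompt values below are only illustrative:

```javascript
// Sketch: build a LibreChat chat URL with automatically encoded parameters.
// URLSearchParams percent-encodes reserved characters such as ':' and '('
// and serializes spaces as '+', which decodes the same as '%20'.
const params = new URLSearchParams({
  endpoint: "anthropic",                   // example endpoint
  model: "claude-3-5-sonnet-20241022",     // example model
  prompt: "Write a function: def hello()", // raw, unencoded prompt
});

const chatUrl = `/c/new?${params.toString()}`;
console.log(chatUrl);
// → /c/new?endpoint=anthropic&model=claude-3-5-sonnet-20241022&prompt=Write+a+function%3A+def+hello%28%29
```

Parsing the query string back with `new URLSearchParams(...)` restores the original prompt text, so links built this way survive the round trip.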
+
+## Supported Parameters
+
+LibreChat supports a wide range of parameters for fine-tuning your conversation settings:
+
+### LibreChat Settings
+- `maxContextTokens`: Override the system-defined context window
+- `resendFiles`: Control file resubmission in subsequent messages
+- `promptPrefix`: Set custom instructions/system message
+
+### Model Parameters
+Different endpoints support various parameters:
+
+**OpenAI, Custom, Azure OpenAI:**
+```bash
+# Note: these should be valid numbers according to the provider's API
+temperature, presence_penalty, frequency_penalty, stop, top_p, max_tokens
+```
+
+**Google, Anthropic:**
+```bash
+# Note: these should be valid numbers according to the provider's API
+topP, topK, maxOutputTokens
+```
+
+**Anthropic Specific:**
+
+Set this to `true` or `false` to toggle prompt caching:
+```bash
+promptCache
+```
+
+More info: https://www.anthropic.com/news/prompt-caching
+
+**Bedrock:**
+```bash
+region, maxTokens
+```
+
+**Assistants/Azure Assistants:**
+```bash
+# overrides existing assistant instructions for current run
+instructions
+```
+
+Example with multiple parameters:
+```bash
+https://your-domain.com/c/new?endpoint=google&model=gemini-2.0-flash-exp&temperature=0.7&prompt=Oh%20hi%20mark
+```
+
+## ⚠️ Warning
+
+Exercise caution when using query parameters:
+- Misusing parameters or exceeding provider limits may result in API errors
+- If you encounter bad request errors, reset the conversation by clicking "New Chat"
+- Some parameters may have no effect if they're not supported by the selected endpoint
+
+## Best Practices
+
+1. Always use both `endpoint` and `model` when possible
+2. Verify parameter support for your chosen endpoint
+3. Use reasonable values within provider limits
+4. Test your parameter combinations before sharing URLs
+
+## Parameter Validation
+
+All parameters are validated against LibreChat's schema before being applied.
Invalid parameters or values will be ignored, and valid settings will be applied to the conversation.
+
+---
+
+This feature enables powerful use cases like:
+- Sharing specific conversation configurations
+- Creating bookmarks for different chat settings
+- Automating chat setup through URL parameters