mirror of
https://github.com/LibreChat-AI/librechat.ai.git
synced 2026-03-27 10:48:32 +07:00
feat: update promptCache documentation to include bedrock (anthropic models) (#442)
@@ -512,7 +512,8 @@ preset:
 > **Note:** Each parameter below includes a note on which endpoints support it.
 > **OpenAI / AzureOpenAI / Custom** typically support `temperature`, `presence_penalty`, `frequency_penalty`, `stop`, `top_p`, `max_tokens`.
-> **Google / Anthropic** typically support `topP`, `topK`, `maxOutputTokens`, `promptCache` (Anthropic only).
+> **Google / Anthropic** typically support `topP`, `topK`, `maxOutputTokens`.
+> **Anthropic / Bedrock (Anthropic models)** support `promptCache`.
 > **Bedrock** supports `region`, `maxTokens`, and a few others.
 
 #### model
@@ -709,7 +710,7 @@ preset:
 
 #### promptCache
 
-> **Supported by:** `anthropic`
+> **Supported by:** `anthropic`, `bedrock` (Anthropic models)
 > (Toggle Anthropic’s “prompt-caching” feature)
 
 <OptionTable
@@ -182,14 +182,14 @@ reasoning_effort, reasoning_summary, verbosity, useResponsesApi, web_search, dis
 topP, topK, maxOutputTokens, thinking, thinkingBudget, web_search
 ```
 
-**Anthropic Specific:**
+**Anthropic, Bedrock (Anthropic models):**
 
 Set this to `true` or `false` to toggle the "prompt-caching":
 ```bash
 promptCache
 ```
 
-More info: https://www.anthropic.com/news/prompt-caching
+More info: https://www.anthropic.com/news/prompt-caching, https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-get-started
 
 **Bedrock:**
 ```bash
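The diff documents `promptCache` as a per-preset boolean toggle. As a rough illustration only, a sketch of how it might sit inside a `librechat.yaml` preset — the surrounding field names and values (`endpoint`, `model`) are assumptions for context and are not taken from this diff:

```yaml
# Sketch under assumptions: only promptCache is documented by the diff;
# the endpoint and model values below are illustrative placeholders.
preset:
  endpoint: "anthropic"         # or "bedrock" when using an Anthropic model
  model: "claude-3-5-sonnet"    # assumed model identifier
  promptCache: true             # toggle Anthropic's prompt-caching feature
```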