---
title: "AWS Bedrock"
icon: Bot
---

Head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) to sign up for AWS and set up your credentials. You'll also need to turn on model access for your account, which you can do by [following these instructions](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html).

## Authentication

- You will need to set the following environment variables:

```bash filename=".env"
BEDROCK_AWS_DEFAULT_REGION=us-east-1
BEDROCK_AWS_ACCESS_KEY_ID=your_access_key_id
BEDROCK_AWS_SECRET_ACCESS_KEY=your_secret_access_key
```

**Note:** You can also omit the access keys in order to use the default AWS credentials chain, but you must still set the default region:

```bash filename=".env"
BEDROCK_AWS_DEFAULT_REGION=us-east-1
```

Doing so prompts the credential provider to find credentials from the following sources (listed in order of precedence):

- Environment variables exposed via `process.env`
- SSO credentials from the token cache
- Web identity token credentials
- Shared credentials and config ini files
- The EC2/ECS Instance Metadata Service

The default credential provider invokes one provider at a time and only continues to the next if no credentials have been located. For example, if the process finds values defined via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, the files at `~/.aws/credentials` and `~/.aws/config` will not be read, nor will any requests be sent to the Instance Metadata Service.
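If you rely on the default chain, you can sanity-check which credentials it resolves before starting LibreChat. Below is a minimal sketch using the AWS SDK for JavaScript v3; it assumes the `@aws-sdk/credential-providers` package is installed, and the script itself is illustrative rather than part of LibreChat:

```typescript
// Minimal sketch: resolve credentials through the default Node.js provider chain.
// Assumes @aws-sdk/credential-providers is installed; not part of LibreChat itself.
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

async function checkCredentials() {
  const provider = fromNodeProviderChain();
  const creds = await provider();
  // Print only the (non-secret) access key ID to confirm resolution succeeded.
  console.log(`Resolved credentials for access key: ${creds.accessKeyId}`);
}

checkCredentials().catch((err) => {
  console.error("The default chain could not locate credentials:", err);
  process.exit(1);
});
```

If this fails, LibreChat will hit the same error when it tries to call Bedrock, so it is worth resolving before touching any other configuration.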
## Configuring models

- You can optionally specify which models you want to make available with `BEDROCK_AWS_MODELS`:

```bash filename=".env"
BEDROCK_AWS_MODELS=anthropic.claude-3-5-sonnet-20240620-v1:0,meta.llama3-1-8b-instruct-v1:0
```

**Note:** If omitted, all known, supported model IDs will be included automatically.

- See all Bedrock model IDs here: **[https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns)**

## Additional Configuration

You can further configure the Bedrock endpoint in your [`librechat.yaml`](/docs/configuration/librechat_yaml/) file:

```yaml
endpoints:
  bedrock:
    availableRegions:
      - "us-east-1"
      - "us-west-2"
    streamRate: 35
    titleModel: 'anthropic.claude-3-haiku-20240307-v1:0'
    guardrailConfig:
      guardrailIdentifier: "abc123xyz"
      guardrailVersion: "1"
      trace: "enabled"
```

- `streamRate`: (Optional) Set the rate of processing each new token in milliseconds.
  - This can help stabilize processing of concurrent requests and provide smoother frontend stream rendering.
- `titleModel`: (Optional) Specify the model to use for generating conversation titles.
  - Recommended: `anthropic.claude-3-haiku-20240307-v1:0`.
  - Omit or set as `current_model` to use the same model as the chat.
- `availableRegions`: (Optional) Specify the AWS regions you want to make available.
  - If provided, users will see a dropdown to select a region. If none is selected, the default region is used.
  - ![image](https://github.com/user-attachments/assets/6f3c5e82-9c6b-4643-8487-07db1061ba49)
- `guardrailConfig`: (Optional) Configure AWS Bedrock Guardrails for content filtering.
  - `guardrailIdentifier`: The guardrail ID or ARN from your AWS Bedrock Console.
  - `guardrailVersion`: The guardrail version number (e.g., `"1"`) or `"DRAFT"`.
  - `trace`: (Optional) Enable trace logging: `"enabled"`, `"disabled"`, or `"enabled_full"`.
  - See the [AWS Bedrock Guardrails documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-how.html) for creating and managing guardrails.

## Inference Profiles

AWS Bedrock inference profiles let you create custom routing configurations for foundation models, enabling cross-region load balancing, cost allocation, and compliance controls. You can map model IDs to custom inference profile ARNs in your `librechat.yaml`:

```yaml
endpoints:
  bedrock:
    inferenceProfiles:
      "us.anthropic.claude-3-7-sonnet-20250219-v1:0": "${BEDROCK_CLAUDE_37_PROFILE}"
```

For the full guide on creating profiles, configuring LibreChat, setting up logging, and troubleshooting, see **[Bedrock Inference Profiles](/docs/configuration/pre_configured_ai/bedrock_inference_profiles)**. For the YAML field reference, see **[AWS Bedrock Object Structure](/docs/configuration/librechat_yaml/object_structure/aws_bedrock#inferenceprofiles)**.

## Notes

- The following models are not supported due to lack of streaming capability:
  - ai21.j2-mid-v1
- The following models are not supported due to lack of conversation history support:
  - ai21.j2-ultra-v1
  - cohere.command-text-v14
  - cohere.command-light-text-v14
- The AWS Bedrock endpoint supports all [Shared Endpoint Settings](/docs/configuration/librechat_yaml/object_structure/shared_endpoint_settings) via the `librechat.yaml` configuration file, including `streamRate`, `titleModel`, `titleMethod`, `titlePrompt`, `titlePromptTemplate`, and `titleEndpoint`.
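If you are unsure which model IDs your account can actually invoke, or which of them support streaming, you can query the Bedrock control plane directly. This is a minimal sketch assuming the `@aws-sdk/client-bedrock` package is installed; it is not part of LibreChat, but the model IDs it prints are the same identifiers you would pass to `BEDROCK_AWS_MODELS`:

```typescript
// Illustrative sketch: list the foundation models available to your account
// and flag streaming support. Assumes @aws-sdk/client-bedrock is installed.
import { BedrockClient, ListFoundationModelsCommand } from "@aws-sdk/client-bedrock";

const client = new BedrockClient({
  region: process.env.BEDROCK_AWS_DEFAULT_REGION ?? "us-east-1",
});

async function listModels() {
  const { modelSummaries } = await client.send(new ListFoundationModelsCommand({}));
  for (const model of modelSummaries ?? []) {
    console.log(`${model.modelId} (streaming: ${model.responseStreamingSupported ?? false})`);
  }
}

listModels().catch(console.error);
```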