---
title: "Integration with TrueFoundry AI Gateway"
---

TrueFoundry offers an enterprise-ready [AI Gateway](https://www.truefoundry.com/ai-gateway) that integrates seamlessly with Dify, adding enterprise-grade AI features such as cost tracking, security guardrails, and access controls.

Routing all LLM calls through TrueFoundry's AI Gateway keeps your AI applications secure, compliant, and cost-effective.

## Prerequisites

Before integrating Dify with TrueFoundry, ensure you have:

1. **TrueFoundry Account**: Create a [TrueFoundry account](https://www.truefoundry.com/register) and follow the instructions in TrueFoundry's [Gateway Quick Start Guide](https://docs.truefoundry.com/gateway/quick-start).
2. **Dify Installation**: Set up Dify using either the [cloud version](https://dify.ai/) or a [self-hosted deployment](https://github.com/langgenius/dify) with Docker.

## Integration Steps

This guide assumes you have Dify installed and running, and have obtained your TrueFoundry AI Gateway base URL and authentication token.

### Step 1: Access Dify Model Provider Settings

1. Log into your Dify workspace (cloud or self-hosted).

2. Navigate to **Settings** and go to **Model Provider**:

<Frame>
  <img src="/images/model provider select diffy.png" />
</Frame>

### Step 2: Install OpenAI-API-Compatible Provider

1. In the Model Provider section, look for **OpenAI-API-compatible** and click **Install**.

2. Configure the OpenAI-API-compatible provider with your TrueFoundry details:

<Frame>
  <img src="/images/open api diffy.png" />
</Frame>

Fill in the following configuration:

- **Model Name**: Enter your TrueFoundry model ID (e.g., `openai-main/gpt-4o`)
- **Model display name**: Enter a display name (e.g., `GPT-4o`)
- **API Key**: Enter your TrueFoundry API key
- **API endpoint URL**: Enter your TrueFoundry Gateway base URL (e.g., `https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai`)
- **Model name for API endpoint**: Enter the endpoint model name (e.g., `openai-main/gpt-4o`)

<Frame>
  <img src="/images/new-code-snippet.png" />
</Frame>

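The two URL-related fields work together: the endpoint model name is sent in the request body, while the standard OpenAI chat-completions path is appended to the API endpoint URL. A minimal sketch of how these values combine, assuming the provider appends the standard `/chat/completions` path (the base URL and model ID are the example values from above; substitute your own):

```python
# Sketch of how the OpenAI-API-compatible provider is assumed to combine
# the configured fields into a request URL. Example values from this guide.

def chat_completions_url(base_url: str) -> str:
    # The standard OpenAI chat path is appended to the Gateway base URL.
    return base_url.rstrip("/") + "/chat/completions"

base = "https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai"
print(chat_completions_url(base))
# → https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai/chat/completions
```

This is why the **API endpoint URL** field should contain only the Gateway base URL, not the full chat-completions path.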
### Step 3: Save and Test Your Configuration

1. Click **Save** to apply your configuration in Dify.

2. Create a new application or workflow to test the integration.

3. Run a simple LLM workflow to verify that Dify is communicating with TrueFoundry's AI Gateway.

<Frame>
  <img src="/images/result diffy.png" />
</Frame>

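If the workflow does not respond, you can check the Gateway credentials outside of Dify with a direct request to the OpenAI-compatible endpoint. A standard-library sketch using the example URL and model ID from this guide (the API key is a placeholder; substitute your own values before sending):

```python
# Build a chat-completions request against the TrueFoundry Gateway's
# OpenAI-compatible endpoint using only the standard library.
# URL, key, and model ID are example/placeholder values.
import json
import urllib.request

BASE_URL = "https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai"
API_KEY = "your-truefoundry-api-key"  # placeholder

payload = {
    "model": "openai-main/gpt-4o",
    "messages": [{"role": "user", "content": "Hello from Dify"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Uncomment once real credentials are in place to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

A `401` response from this request indicates a bad API key; a `404` usually means the base URL or model ID is wrong.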
Your Dify workspace is now integrated with TrueFoundry's AI Gateway and ready for building AI applications, workflows, and agents.

{/*
Contributing Section
DO NOT edit this section!
It will be automatically generated by the script.
*/}

---

[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/development/models-integration/truefoundry.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml)