---
dimensions:
  type:
    primary: implementation
    detail: advanced
  level: intermediate
standard_title: Reverse Invocation Model
language: en
title: Reverse Invocation of Models
description: This document explains how plugins can reverse invoke model services within the Dify platform. It covers how to reverse invoke LLM, Summary, TextEmbedding, Rerank, TTS, Speech2Text, and Moderation models, and for each model documents the entry point, interface parameters, working code examples, and best-practice recommendations.
---

<Note> ⚠️ This document was automatically translated by AI. If you find any inaccuracies, please refer to the [English original](/en/develop-plugin/features-and-specs/advanced-development/reverse-invocation-model).</Note>

Reverse invocation of models means that a plugin can call the LLM capabilities inside Dify, covering every model type and feature on the platform, such as TTS, Rerank, and so on. If you are not familiar with the basics of reverse invocation, read [Reverse Invocation of Dify Services](/zh/develop-plugin/features-and-specs/advanced-development/reverse-invocation) first.

Note, however, that invoking a model requires a parameter of type `ModelConfig`. Its structure is defined in the [General Specifications](/zh/develop-plugin/features-and-specs/plugin-types/general-specifications), and it differs slightly between model types.

For example, an `LLM`-type model additionally requires `completion_params` and `mode`. You can construct this structure by hand, or use a parameter or configuration of type `model-selector`.

### Invoking LLM

#### **Entry point**

```python
self.session.model.llm
```

#### **Interface**

```python
def invoke(
    self,
    model_config: LLMModelConfig,
    prompt_messages: list[PromptMessage],
    tools: list[PromptMessageTool] | None = None,
    stop: list[str] | None = None,
    stream: bool = True,
) -> Generator[LLMResultChunk, None, None] | LLMResult:
    pass
```

Note that if the model you invoke does not have the `tool_call` capability, the `tools` passed here will have no effect.

#### **Usage example**

If you want to invoke OpenAI's `gpt-4o-mini` model from within a `Tool`, refer to the following example code:

```python
from collections.abc import Generator
from typing import Any

from dify_plugin import Tool
from dify_plugin.entities.model.llm import LLMModelConfig
from dify_plugin.entities.tool import ToolInvokeMessage
from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage


class LLMTool(Tool):
    def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]:
        response = self.session.model.llm.invoke(
            model_config=LLMModelConfig(
                provider='openai',
                model='gpt-4o-mini',
                mode='chat',
                completion_params={}
            ),
            prompt_messages=[
                SystemPromptMessage(
                    content='you are a helpful assistant'
                ),
                UserPromptMessage(
                    content=tool_parameters.get('query')
                )
            ],
            stream=True
        )

        for chunk in response:
            if chunk.delta.message:
                assert isinstance(chunk.delta.message.content, str)
                yield self.create_text_message(text=chunk.delta.message.content)
```

Note that the code passes in the `query` parameter from `tool_parameters`.

### **Best Practices**

Building an `LLMModelConfig` by hand is not recommended. Instead, let users choose the model they want to use in the UI. In that case, you can modify the tool's parameter list by adding a `model` parameter as follows:
```yaml
identity:
  name: llm
  author: Dify
  label:
    en_US: LLM
    zh_Hans: LLM
    pt_BR: LLM
description:
  human:
    en_US: A tool for invoking a large language model
    zh_Hans: 用于调用大型语言模型的工具
    pt_BR: A tool for invoking a large language model
  llm: A tool for invoking a large language model
parameters:
  - name: prompt
    type: string
    required: true
    label:
      en_US: Prompt string
      zh_Hans: 提示字符串
      pt_BR: Prompt string
    human_description:
      en_US: used for searching
      zh_Hans: 用于搜索网页内容
      pt_BR: used for searching
    llm_description: key words for searching
    form: llm
  - name: model
    type: model-selector
    scope: llm
    required: true
    label:
      en_US: Model
      zh_Hans: 使用的模型
      pt_BR: Model
    human_description:
      en_US: Model
      zh_Hans: 使用的模型
      pt_BR: Model
    llm_description: which Model to invoke
    form: form
extra:
  python:
    source: tools/llm.py
```

Note that in this example, the `scope` of `model` is set to `llm`, which means users can only select models of type `llm`. The code from the earlier usage example can therefore be modified as follows:
```python
from collections.abc import Generator
from typing import Any

from dify_plugin import Tool
from dify_plugin.entities.tool import ToolInvokeMessage
from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage


class LLMTool(Tool):
    def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]:
        response = self.session.model.llm.invoke(
            model_config=tool_parameters.get('model'),
            prompt_messages=[
                SystemPromptMessage(
                    content='you are a helpful assistant'
                ),
                UserPromptMessage(
                    content=tool_parameters.get('prompt')  # the 'prompt' parameter defined in the YAML above
                )
            ],
            stream=True
        )

        for chunk in response:
            if chunk.delta.message:
                assert isinstance(chunk.delta.message.content, str)
                yield self.create_text_message(text=chunk.delta.message.content)
```

### Invoking Summary

You can call this interface to summarize a piece of text. It uses the system model in the current workspace to perform the summarization.

**Entry point**

```python
self.session.model.summary
```

**Interface**

* `text` is the text to be summarized.
* `instruction` is an additional instruction you want to add, allowing you to summarize the text in a specific style.

```python
def invoke(
    self, text: str, instruction: str,
) -> str:
```
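
For long inputs, a common pattern is to split the text, summarize each chunk, and then summarize the concatenated partial summaries. Below is a minimal sketch of that pattern; the `summarize` callable is a stand-in for `self.session.model.summary.invoke` with the same `(text, instruction)` signature, and the `chunk_size` value is an arbitrary assumption, not a platform limit:

```python
from collections.abc import Callable


def summarize_long_text(
    text: str,
    instruction: str,
    summarize: Callable[[str, str], str],  # stand-in for self.session.model.summary.invoke
    chunk_size: int = 4000,  # assumed budget, tune for your model
) -> str:
    """Map-reduce summarization: summarize each chunk, then summarize the summaries."""
    if len(text) <= chunk_size:
        return summarize(text, instruction)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [summarize(chunk, instruction) for chunk in chunks]
    return summarize('\n'.join(partials), instruction)
```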

### Invoking TextEmbedding

**Entry point**

```python
self.session.model.text_embedding
```

**Interface**

```python
def invoke(
    self, model_config: TextEmbeddingModelConfig, texts: list[str]
) -> TextEmbeddingResult:
    pass
```
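
The returned `TextEmbeddingResult` carries one vector per input text, and a typical follow-up step is comparing two texts by cosine similarity. A self-contained sketch over plain `list[float]` vectors standing in for the embeddings:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```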

### Invoking Rerank

**Entry point**

```python
self.session.model.rerank
```

**Interface**

```python
def invoke(
    self, model_config: RerankModelConfig, docs: list[str], query: str
) -> RerankResult:
    pass
```
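
The `RerankResult` associates each document with a relevance score, and downstream code usually keeps only the top-k documents. A sketch of that selection step, using plain parallel lists as stand-ins for the scored documents in the result:

```python
def top_k_documents(docs: list[str], scores: list[float], k: int) -> list[str]:
    """Keep the k documents with the highest rerank scores, best first."""
    ranked = sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:k]]
```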

### Invoking TTS

**Entry point**

```python
self.session.model.tts
```

**Interface**

```python
def invoke(
    self, model_config: TTSModelConfig, content_text: str
) -> Generator[bytes, None, None]:
    pass
```

Note that the `bytes` stream returned by the `tts` interface is an `mp3` audio byte stream, and each iteration returns a complete audio chunk. If you want to perform more advanced processing, choose an appropriate library.
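
Because each iteration yields raw `mp3` bytes, saving the generated speech is just a matter of concatenating chunks into a binary sink. A sketch, with a fake generator standing in for the real `self.session.model.tts.invoke` call:

```python
import io
from collections.abc import Generator, Iterable


def save_tts_stream(chunks: Iterable[bytes], out: io.BufferedIOBase) -> int:
    """Write mp3 chunks to a binary stream; returns total bytes written."""
    total = 0
    for chunk in chunks:
        out.write(chunk)
        total += len(chunk)
    return total


def fake_tts(content_text: str) -> Generator[bytes, None, None]:
    # Stand-in for the real TTS invocation; yields dummy chunks
    # that start with an mp3 frame sync marker.
    for _ in range(3):
        yield b'\xff\xfb' + b'\x00' * 10


buf = io.BytesIO()
written = save_tts_stream(fake_tts('hello'), buf)
```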

### Invoking Speech2Text

**Entry point**

```python
self.session.model.speech2text
```

**Interface**

```python
def invoke(
    self, model_config: Speech2TextModelConfig, file: IO[bytes]
) -> str:
    pass
```

Here `file` is an audio file encoded in `mp3` format.

### Invoking Moderation

**Entry point**

```python
self.session.model.moderation
```

**Interface**

```python
def invoke(self, model_config: ModerationModelConfig, text: str) -> bool:
    pass
```

If this interface returns `true`, it means the `text` contains sensitive content.
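
A typical use is to gate further processing on the moderation verdict. Below is a minimal sketch of that guard pattern; `moderate` and `process` are hypothetical callables, with `moderate` standing in for `self.session.model.moderation.invoke` (returning `True` when the text is flagged):

```python
from collections.abc import Callable


def safe_process(
    text: str,
    moderate: Callable[[str], bool],  # stand-in; True means the text is flagged
    process: Callable[[str], str],
) -> str:
    """Run `process` only when moderation does not flag the input."""
    if moderate(text):
        return 'Input rejected: sensitive content detected.'
    return process(text)
```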

## Related Resources

- [Reverse Invocation of Dify Services](/zh/develop-plugin/features-and-specs/advanced-development/reverse-invocation) - Learn the basic concepts of reverse invocation
- [Reverse Invocation of Apps](/zh/develop-plugin/features-and-specs/advanced-development/reverse-invocation-app) - Learn how to invoke apps within the platform
- [Reverse Invocation of Tools](/zh/develop-plugin/features-and-specs/advanced-development/reverse-invocation-tool) - Learn how to invoke other plugins
- [Model Plugin Development Guide](/zh/develop-plugin/dev-guides-and-walkthroughs/creating-new-model-provider) - Learn how to develop custom model plugins
- [Model Designing Rules](/zh/develop-plugin/features-and-specs/plugin-types/model-designing-rules) - Learn the design principles of model plugins

{/*
Contributing Section
DO NOT edit this section!
It will be automatically generated by the script.
*/}

---

[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/develop-plugin/features-and-specs/advanced-development/reverse-invocation-model.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml)