diff --git a/conversion.log b/conversion.log
index be162dd9..a63c1ca7 100644
--- a/conversion.log
+++ b/conversion.log
@@ -437,3 +437,853 @@
 2025-03-19 18:15:13,468 - md-to-mdx - INFO - Conversion completed: zh-hans/policies/output/agreement/README.mdx
 2025-03-19 18:15:13,468 - md-to-mdx - INFO - Processing file: zh-hans/policies/agreement/get-compliance-report.md
 2025-03-19 18:15:13,469 - md-to-mdx - INFO - Conversion completed: zh-hans/policies/output/agreement/get-compliance-report.mdx
+2025-03-20 16:38:40,057 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:38:40,062 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/predefined-model.mdx
+2025-03-20 16:38:40,063 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:38:40,063 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/schema.md
+2025-03-20 16:38:40,065 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/schema.mdx
+2025-03-20 16:38:40,067 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/schema.md
+2025-03-20 16:38:40,068 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:38:40,070 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/customizable-model.mdx
+2025-03-20 16:38:40,070 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:38:40,070 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/README.md
+2025-03-20 16:38:40,070 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/README.mdx
+2025-03-20 16:38:40,071 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/README.md
+2025-03-20 16:38:40,071 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:38:40,071 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/interfaces.mdx
+2025-03-20 16:38:40,072 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:38:40,072 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:38:40,072 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/new-provider.mdx
+2025-03-20 16:38:40,073 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:38:40,073 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/load-balancing.md
+2025-03-20 16:38:40,073 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/load-balancing.mdx
+2025-03-20 16:38:40,073 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/load-balancing.md
+2025-03-20 16:50:11,059 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:50:11,066 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/predefined-model.mdx
+2025-03-20 16:50:11,067 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:50:11,067 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/schema.md
+2025-03-20 16:50:11,070 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/schema.mdx
+2025-03-20 16:50:11,072 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/schema.md
+2025-03-20 16:50:11,073 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:50:11,074 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/customizable-model.mdx
+2025-03-20 16:50:11,075 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:50:11,075 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/README.md
+2025-03-20 16:50:11,075 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/README.mdx
+2025-03-20 16:50:11,076 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/README.md
+2025-03-20 16:50:11,076 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:50:11,076 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/interfaces.mdx
+2025-03-20 16:50:11,076 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:50:11,077 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:50:11,077 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/new-provider.mdx
+2025-03-20 16:50:11,077 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:50:11,077 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/load-balancing.md
+2025-03-20 16:50:11,078 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/load-balancing.mdx
+2025-03-20 16:50:11,078 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/load-balancing.md
+2025-03-20 16:51:00,865 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:51:00,872 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/predefined-model.mdx
+2025-03-20 16:51:00,872 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/predefined-model.md
+2025-03-20 16:51:00,872 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/schema.md
+2025-03-20 16:51:00,873 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/schema.mdx
+2025-03-20 16:51:00,874 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/schema.md
+2025-03-20 16:51:00,874 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:51:00,878 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/customizable-model.mdx
+2025-03-20 16:51:00,878 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/customizable-model.md
+2025-03-20 16:51:00,878 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/README.md
+2025-03-20 16:51:00,880 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/README.mdx
+2025-03-20 16:51:00,880 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/README.md
+2025-03-20 16:51:00,880 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:51:00,881 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/interfaces.mdx
+2025-03-20 16:51:00,881 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/interfaces.md
+2025-03-20 16:51:00,881 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:51:00,882 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/new-provider.mdx
+2025-03-20 16:51:00,882 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/new-provider.md
+2025-03-20 16:51:00,882 - md-to-mdx - INFO - Processing file: en/guides/model-configuration/load-balancing.md
+2025-03-20 16:51:00,882 - md-to-mdx - INFO - Conversion completed: en/guides/model-configuration/load-balancing.mdx
+2025-03-20 16:51:00,882 - md-to-mdx - INFO - Deleted source file: en/guides/model-configuration/load-balancing.md
+2025-03-21 10:57:59,103 - md-to-mdx - INFO - Processing file: en/workshop/basic/build-ai-image-generation-app.md
+2025-03-21 10:57:59,110 - md-to-mdx - INFO - Conversion completed: en/workshop/basic/build-ai-image-generation-app.mdx
+2025-03-21 10:57:59,111 - md-to-mdx - INFO - Deleted source file: en/workshop/basic/build-ai-image-generation-app.md
+2025-03-21 10:57:59,111 - md-to-mdx - INFO - Processing file: en/workshop/basic/travel-assistant.md
+2025-03-21 10:57:59,114 - md-to-mdx - INFO - Conversion completed: en/workshop/basic/travel-assistant.mdx
+2025-03-21 10:57:59,114 - md-to-mdx - INFO - Deleted source file: en/workshop/basic/travel-assistant.md
+2025-03-21 10:57:59,114 - md-to-mdx - INFO - Processing file: en/workshop/basic/README.md
+2025-03-21 10:57:59,115 - md-to-mdx - INFO - Conversion completed: en/workshop/basic/README.mdx
+2025-03-21 10:57:59,115 - md-to-mdx - INFO - Deleted source file: en/workshop/basic/README.md
+2025-03-21 10:57:59,116 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/twitter-chatflow.md
+2025-03-21 10:57:59,118 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/twitter-chatflow.mdx
+2025-03-21 10:57:59,118 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/twitter-chatflow.md
+2025-03-21 10:57:59,118 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/customer-service-bot.md
+2025-03-21 10:57:59,121 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/customer-service-bot.mdx
+2025-03-21 10:57:59,121 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/customer-service-bot.md
+2025-03-21 10:57:59,121 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/README.md
+2025-03-21 10:57:59,122 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/README.mdx
+2025-03-21 10:57:59,122 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/README.md
+2025-03-21 10:57:59,122 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/article-reader.md
+2025-03-21 10:57:59,122 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/article-reader.mdx
+2025-03-21 10:57:59,123 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/article-reader.md
+2025-03-21 10:59:25,125 - md-to-mdx - INFO - Processing file: en/community/contribution.md
+2025-03-21 10:59:25,126 - md-to-mdx - INFO - Conversion completed: en/community/contribution.mdx
+2025-03-21 10:59:25,126 - md-to-mdx - INFO - Deleted source file: en/community/contribution.md
+2025-03-21 10:59:25,126 - md-to-mdx - INFO - Processing file: en/community/docs-contribution.md
+2025-03-21 10:59:25,126 - md-to-mdx - INFO - Conversion completed: en/community/docs-contribution.mdx
+2025-03-21 10:59:25,127 - md-to-mdx - INFO - Deleted source file: en/community/docs-contribution.md
+2025-03-21 10:59:25,127 - md-to-mdx - INFO - Processing file: en/community/support.md
+2025-03-21 10:59:25,127 - md-to-mdx - INFO - Conversion completed: en/community/support.mdx
+2025-03-21 10:59:25,127 - md-to-mdx - INFO - Deleted source file: en/community/support.md
+2025-03-21 11:00:11,378 - md-to-mdx - INFO - Processing file: en/plugins/faq.md
+2025-03-21 11:00:11,381 - md-to-mdx - INFO - Conversion completed: en/plugins/faq.mdx
+2025-03-21 11:00:11,381 - md-to-mdx - INFO - Deleted source file: en/plugins/faq.md
+2025-03-21 11:00:11,381 - md-to-mdx - INFO - Processing file: en/plugins/introduction.md
+2025-03-21 11:00:11,382 - md-to-mdx - INFO - Conversion completed: en/plugins/introduction.mdx
+2025-03-21 11:00:11,382 - md-to-mdx - INFO - Deleted source file: en/plugins/introduction.md
+2025-03-21 11:00:11,382 - md-to-mdx - INFO - Processing file: en/plugins/manage-plugins.md
+2025-03-21 11:00:11,382 - md-to-mdx - INFO - Conversion completed: en/plugins/manage-plugins.mdx
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Deleted source file: en/plugins/manage-plugins.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.mdx
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/README.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/README.mdx
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/README.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/package-plugin-file-and-publish.md
+2025-03-21 11:00:11,383 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/package-plugin-file-and-publish.mdx
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/package-plugin-file-and-publish.md
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/publish-to-dify-marketplace/README.md
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/publish-to-dify-marketplace/README.mdx
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/publish-to-dify-marketplace/README.md
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.md
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.mdx
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.md
+2025-03-21 11:00:11,384 - md-to-mdx - INFO - Processing file: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md
+2025-03-21 11:00:11,385 - md-to-mdx - INFO - Conversion completed: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.mdx
+2025-03-21 11:00:11,385 - md-to-mdx - INFO - Deleted source file: en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md
+2025-03-21 11:00:11,385 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/agent.md
+2025-03-21 11:00:11,385 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/agent.mdx
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/agent.md
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/endpoint.md
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/endpoint.mdx
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/endpoint.md
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/general-specifications.md
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/general-specifications.mdx
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/general-specifications.md
+2025-03-21 11:00:11,386 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/tool.md
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/tool.mdx
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/tool.md
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/persistent-storage.md
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/persistent-storage.mdx
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/persistent-storage.md
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/README.md
+2025-03-21 11:00:11,387 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/README.mdx
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/README.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/manifest.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/manifest.mdx
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/manifest.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.mdx
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.mdx
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.md
+2025-03-21 11:00:11,388 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.mdx
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.mdx
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.mdx
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/model/model-schema.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/model/model-schema.mdx
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/model/model-schema.md
+2025-03-21 11:00:11,389 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/model/README.md
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/model/README.mdx
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/model/README.md
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Processing file: en/plugins/schema-definition/model/model-designing-rules.md
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Conversion completed: en/plugins/schema-definition/model/model-designing-rules.mdx
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Deleted source file: en/plugins/schema-definition/model/model-designing-rules.md
+2025-03-21 11:00:11,390 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/install-plugins.md
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/install-plugins.mdx
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/install-plugins.md
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/README.md
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/README.mdx
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/README.md
+2025-03-21 11:00:11,391 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/debug-plugin.md
+2025-03-21 11:00:11,392 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/debug-plugin.mdx
+2025-03-21 11:00:11,392 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/debug-plugin.md
+2025-03-21 11:00:11,392 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/extension-plugin.md
+2025-03-21 11:00:11,393 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/extension-plugin.mdx
+2025-03-21 11:00:11,394 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/extension-plugin.md
+2025-03-21 11:00:11,394 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/agent-strategy-plugin.md
+2025-03-21 11:00:11,399 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/agent-strategy-plugin.mdx
+2025-03-21 11:00:11,400 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/agent-strategy-plugin.md
+2025-03-21 11:00:11,400 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/README.md
+2025-03-21 11:00:11,400 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/README.mdx
+2025-03-21 11:00:11,400 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/README.md
+2025-03-21 11:00:11,400 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/tool-plugin.md
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/tool-plugin.mdx
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/tool-plugin.md
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/bundle.md
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/bundle.mdx
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/bundle.md
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/initialize-development-tools.md
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/initialize-development-tools.mdx
+2025-03-21 11:00:11,403 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/initialize-development-tools.md
+2025-03-21 11:00:11,404 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.md
+2025-03-21 11:00:11,404 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.mdx
+2025-03-21 11:00:11,404 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.md
+2025-03-21 11:00:11,404 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.md
+2025-03-21 11:00:11,405 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx
+2025-03-21 11:00:11,405 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.md
+2025-03-21 11:00:11,405 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.md
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.md
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Processing file: en/plugins/quick-start/develop-plugins/model-plugin/README.md
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Conversion completed: en/plugins/quick-start/develop-plugins/model-plugin/README.mdx
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Deleted source file: en/plugins/quick-start/develop-plugins/model-plugin/README.md
+2025-03-21 11:00:11,406 - md-to-mdx - INFO - Processing file: en/plugins/best-practice/develop-a-slack-bot-plugin.md
+2025-03-21 11:00:11,411 - md-to-mdx - INFO - Conversion completed: en/plugins/best-practice/develop-a-slack-bot-plugin.mdx
+2025-03-21 11:00:11,411 - md-to-mdx - INFO - Deleted source file: en/plugins/best-practice/develop-a-slack-bot-plugin.md
+2025-03-21 11:00:11,411 - md-to-mdx - INFO - Processing file: en/plugins/best-practice/README.md
+2025-03-21 11:00:11,411 - md-to-mdx - INFO - Conversion completed: en/plugins/best-practice/README.mdx
+2025-03-21 11:00:11,411 - md-to-mdx - INFO - Deleted source file: en/plugins/best-practice/README.md
+2025-03-21 11:27:27,046 - md-to-mdx - INFO - Processing file: en/development/backend/README.md
+2025-03-21 11:27:27,047 - md-to-mdx - INFO - Conversion completed: en/development/backend/README.mdx
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Deleted source file: en/development/backend/README.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Processing file: en/development/backend/sandbox/contribution.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Conversion completed: en/development/backend/sandbox/contribution.mdx
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Deleted source file: en/development/backend/sandbox/contribution.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Processing file: en/development/backend/sandbox/README.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Conversion completed: en/development/backend/sandbox/README.mdx
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Deleted source file: en/development/backend/sandbox/README.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Processing file: en/development/models-integration/openllm.md
+2025-03-21 11:27:27,048 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/openllm.mdx
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/openllm.md
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Processing file: en/development/models-integration/xinference.md
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/xinference.mdx
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/xinference.md
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Processing file: en/development/models-integration/litellm.md
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/litellm.mdx
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/litellm.md
+2025-03-21 11:27:27,049 - md-to-mdx - INFO - Processing file: en/development/models-integration/ollama.md
+2025-03-21 11:27:27,050 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/ollama.mdx
+2025-03-21 11:27:27,050 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/ollama.md
+2025-03-21 11:27:27,050 - md-to-mdx - INFO - Processing file: en/development/models-integration/gpustack.md
+2025-03-21 11:27:27,050 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/gpustack.mdx
+2025-03-21 11:27:27,051 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/gpustack.md
+2025-03-21 11:27:27,051 - md-to-mdx - INFO - Processing file: en/development/models-integration/README.md
+2025-03-21 11:27:27,051 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/README.mdx
+2025-03-21 11:27:27,051 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/README.md
+2025-03-21 11:27:27,051 - md-to-mdx - INFO - Processing file: en/development/models-integration/aws-bedrock-deepseek.md
+2025-03-21 11:27:27,052 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/aws-bedrock-deepseek.mdx
+2025-03-21 11:27:27,052 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/aws-bedrock-deepseek.md
+2025-03-21 11:27:27,052 - md-to-mdx - INFO - Processing file: en/development/models-integration/hugging-face.md
+2025-03-21 11:27:27,053 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/hugging-face.mdx
+2025-03-21 11:27:27,053 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/hugging-face.md
+2025-03-21 11:27:27,053 - md-to-mdx - INFO - Processing file: en/development/models-integration/replicate.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/replicate.mdx
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/replicate.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Processing file: en/development/models-integration/localai.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Conversion completed: en/development/models-integration/localai.mdx
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Deleted source file: en/development/models-integration/localai.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Processing file: en/development/migration/README.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Conversion completed: en/development/migration/README.mdx
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Deleted source file: en/development/migration/README.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Processing file: en/development/migration/migrate-to-v1.md
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Conversion completed: en/development/migration/migrate-to-v1.mdx
+2025-03-21 11:27:27,054 - md-to-mdx - INFO - Deleted source file: en/development/migration/migrate-to-v1.md
+2025-03-21 11:34:22,078 - md-to-mdx - INFO - Processing file: en/learn-more/how-to-use-json-schema-in-dify.md
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Conversion completed: en/learn-more/how-to-use-json-schema-in-dify.mdx
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Deleted source file: en/learn-more/how-to-use-json-schema-in-dify.md
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Processing file: en/learn-more/faq/use-llms-faq.md
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Conversion completed: en/learn-more/faq/use-llms-faq.mdx
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Deleted source file: en/learn-more/faq/use-llms-faq.md
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Processing file: en/learn-more/faq/README.md
+2025-03-21 11:34:22,080 - md-to-mdx - INFO - Conversion completed: en/learn-more/faq/README.mdx
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Deleted source file: en/learn-more/faq/README.md
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Processing file: en/learn-more/faq/install-faq.md
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Conversion completed: en/learn-more/faq/install-faq.mdx
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Deleted source file: en/learn-more/faq/install-faq.md
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Processing file: en/learn-more/faq/plugins.md
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Conversion completed: en/learn-more/faq/plugins.mdx
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Deleted source file: en/learn-more/faq/plugins.md
+2025-03-21 11:34:22,081 - md-to-mdx - INFO - Processing file: en/learn-more/prompt-engineering/README.md
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Conversion completed: en/learn-more/prompt-engineering/README.mdx
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Deleted source file: en/learn-more/prompt-engineering/README.md
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Processing file: en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.md
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Conversion completed: en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.mdx
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Deleted source file: en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.md
+2025-03-21 11:34:22,082 - md-to-mdx - INFO - Processing file: en/learn-more/prompt-engineering/prompt-engineering-1/README.md
+2025-03-21 11:34:22,083 - md-to-mdx - INFO - Conversion completed: en/learn-more/prompt-engineering/prompt-engineering-1/README.mdx
+2025-03-21 11:34:22,083 - md-to-mdx - INFO - Deleted source file: en/learn-more/prompt-engineering/prompt-engineering-1/README.md
+2025-03-21 11:34:22,083 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/how-to-use-json-schema-in-dify.mdx
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/README.md
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/README.mdx
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/README.md
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/what-is-llmops.md
+2025-03-21 11:34:22,084 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/what-is-llmops.mdx
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/what-is-llmops.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/retrieval-augment/retrieval.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/retrieval-augment/retrieval.mdx
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/retrieval-augment/retrieval.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/retrieval-augment/README.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/retrieval-augment/README.mdx
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/retrieval-augment/README.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/retrieval-augment/rerank.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/retrieval-augment/rerank.mdx
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/retrieval-augment/rerank.md
+2025-03-21 11:34:22,085 - md-to-mdx - INFO - Processing file: en/learn-more/extended-reading/retrieval-augment/hybrid-search.md
+2025-03-21 11:34:22,086 - md-to-mdx - INFO - Conversion completed: en/learn-more/extended-reading/retrieval-augment/hybrid-search.mdx
+2025-03-21 11:34:22,086 - md-to-mdx - INFO - Deleted source file: en/learn-more/extended-reading/retrieval-augment/hybrid-search.md
+2025-03-21 11:34:22,086 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/building-an-ai-thesis-slack-bot.md
+2025-03-21 11:34:22,089 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/building-an-ai-thesis-slack-bot.mdx
+2025-03-21 11:34:22,089 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/building-an-ai-thesis-slack-bot.md
+2025-03-21 11:34:22,089 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/dify-schedule.md
+2025-03-21 11:34:22,090 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/dify-schedule.mdx
+2025-03-21 11:34:22,090 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/dify-schedule.md
+2025-03-21 11:34:22,090 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/private-ai-ollama-deepseek-dify.md
+2025-03-21 11:34:22,092 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/private-ai-ollama-deepseek-dify.mdx
+2025-03-21 11:34:22,092 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/private-ai-ollama-deepseek-dify.md
+2025-03-21 11:34:22,092 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md
+2025-03-21 11:34:22,092 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.mdx
+2025-03-21 11:34:22,093 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md
+2025-03-21 11:34:22,093 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/build-an-notion-ai-assistant.md
+2025-03-21 11:34:22,099 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/build-an-notion-ai-assistant.mdx
+2025-03-21 11:34:22,099 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/build-an-notion-ai-assistant.md
+2025-03-21 11:34:22,099 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/how-to-connect-aws-bedrock.md
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/how-to-connect-aws-bedrock.mdx
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/how-to-connect-aws-bedrock.md
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/README.md
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/README.mdx
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/README.md
+2025-03-21 11:34:22,100 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md
+2025-03-21 11:34:22,101 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx
+2025-03-21 11:34:22,101 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md
+2025-03-21 11:34:22,101 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.mdx
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/dify-model-arena.md
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/dify-model-arena.mdx
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/dify-model-arena.md
+2025-03-21 11:34:22,102 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md
+2025-03-21 11:34:22,103 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.mdx
+2025-03-21 11:34:22,103 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md
+2025-03-21 11:34:22,103 - md-to-mdx - INFO - Processing file: en/learn-more/use-cases/how-to-creat-dify-schedule.md
+2025-03-21 11:34:22,103 - md-to-mdx - INFO - Conversion completed: en/learn-more/use-cases/how-to-creat-dify-schedule.mdx
+2025-03-21 11:34:22,104 - md-to-mdx - INFO - Deleted source file: en/learn-more/use-cases/how-to-creat-dify-schedule.md
+2025-03-21 11:37:18,213 - md-to-mdx - INFO - Processing file: en/policies/open-source.md
+2025-03-21 11:37:18,227 - md-to-mdx - INFO - Conversion completed: en/policies/open-source.mdx
+2025-03-21 11:37:18,228 - md-to-mdx - INFO - Deleted source file: en/policies/open-source.md
+2025-03-21 11:37:18,228 - md-to-mdx - INFO - Processing file: en/policies/agreement/README.md
+2025-03-21 11:37:18,229 - md-to-mdx - INFO - Conversion completed: en/policies/agreement/README.mdx
+2025-03-21 11:37:18,229 - md-to-mdx - INFO - Deleted source file: en/policies/agreement/README.md
+2025-03-21 11:37:18,229 - md-to-mdx - INFO - Processing file: en/policies/agreement/get-compliance-report.md
+2025-03-21 11:37:18,231 - md-to-mdx - INFO - Conversion completed: en/policies/agreement/get-compliance-report.mdx
+2025-03-21 11:37:18,231 - md-to-mdx - INFO - Deleted source file: en/policies/agreement/get-compliance-report.md
+2025-03-21 11:46:22,244 - md-to-mdx - INFO - Processing file: en/guides/workflow/shortcut-key.md
+2025-03-21 11:46:22,245 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/shortcut-key.mdx
+2025-03-21 11:46:22,245 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/shortcut-key.md
+2025-03-21 11:46:22,245 - md-to-mdx - INFO - Processing file: en/guides/workflow/key-concepts.md
+2025-03-21 11:46:22,246 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/key-concepts.mdx
+2025-03-21 11:46:22,246 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/key-concepts.md
+2025-03-21 11:46:22,246 - md-to-mdx - INFO - Processing file: en/guides/workflow/bulletin.md
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/bulletin.mdx
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/bulletin.md
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Processing file: en/guides/workflow/publish.md
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/publish.mdx
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/publish.md
+2025-03-21 11:46:22,247 - md-to-mdx - INFO - Processing file: en/guides/workflow/file-upload.md
+2025-03-21 11:46:22,248 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/file-upload.mdx
+2025-03-21 11:46:22,248 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/file-upload.md
+2025-03-21 11:46:22,248 - md-to-mdx - INFO - Processing file: en/guides/workflow/README.md
+2025-03-21 11:46:22,249 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/README.mdx
+2025-03-21 11:46:22,249 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/README.md
+2025-03-21 11:46:22,249 - md-to-mdx - INFO - Processing file: en/guides/workflow/orchestrate-node.md
+2025-03-21 11:46:22,250 - md-to-mdx - INFO - Conversion completed: en/guides/workflow/orchestrate-node.mdx
+2025-03-21 11:46:22,250 - md-to-mdx - INFO - Deleted source file: en/guides/workflow/orchestrate-node.md
+2025-03-21 11:46:22,250 - md-to-mdx -
INFO - 处理文件: en/guides/workflow/additional-features.md +2025-03-21 11:46:22,250 - md-to-mdx - INFO - 转换完成: en/guides/workflow/additional-features.mdx +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/additional-features.md +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 处理文件: en/guides/workflow/export_import.md +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 转换完成: en/guides/workflow/export_import.mdx +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/export_import.md +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 处理文件: en/guides/workflow/variables.md +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 转换完成: en/guides/workflow/variables.mdx +2025-03-21 11:46:22,251 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/variables.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/history.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/history.mdx +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/history.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/README.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/README.mdx +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/README.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/log.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/log.mdx +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/log.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/checklist.md +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/checklist.mdx +2025-03-21 11:46:22,252 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/checklist.md +2025-03-21 11:46:22,252 - 
md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/step-run.md +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/step-run.mdx +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/step-run.md +2025-03-21 11:46:22,253 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/error-type.md +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 转换完成: en/guides/workflow/error-handling/error-type.mdx +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/error-type.md +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/predefined-error-handling-logic.md +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 转换完成: en/guides/workflow/error-handling/predefined-error-handling-logic.mdx +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/predefined-error-handling-logic.md +2025-03-21 11:46:22,254 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/README.md +2025-03-21 11:46:22,255 - md-to-mdx - INFO - 转换完成: en/guides/workflow/error-handling/README.mdx +2025-03-21 11:46:22,255 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/README.md +2025-03-21 11:46:22,256 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/agent.md +2025-03-21 11:46:22,256 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/agent.mdx +2025-03-21 11:46:22,256 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/agent.md +2025-03-21 11:46:22,256 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/variable-assigner.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 转换完成: 
en/guides/workflow/node/variable-assigner.mdx +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/variable-assigner.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/template.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/template.mdx +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/template.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/loop.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/loop.mdx +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/loop.md +2025-03-21 11:46:22,257 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/parameter-extractor.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/parameter-extractor.mdx +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/parameter-extractor.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/variable-aggregator.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/variable-aggregator.mdx +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/variable-aggregator.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/knowledge-retrieval.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/knowledge-retrieval.mdx +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/knowledge-retrieval.md +2025-03-21 11:46:22,258 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/doc-extractor.md +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/doc-extractor.mdx +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/doc-extractor.md +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/list-operator.md +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 转换完成: 
en/guides/workflow/node/list-operator.mdx +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/list-operator.md +2025-03-21 11:46:22,259 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/http-request.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/http-request.mdx +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/http-request.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/README.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/README.mdx +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/README.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/answer.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/answer.mdx +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/answer.md +2025-03-21 11:46:22,260 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/iteration.md +2025-03-21 11:46:22,261 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/iteration.mdx +2025-03-21 11:46:22,261 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/iteration.md +2025-03-21 11:46:22,261 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/llm.md +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/llm.mdx +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/llm.md +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/code.md +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/code.mdx +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/code.md +2025-03-21 11:46:22,266 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/end.md +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/end.mdx +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/end.md +2025-03-21 11:46:22,267 - 
md-to-mdx - INFO - 处理文件: en/guides/workflow/node/start.md +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/start.mdx +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/start.md +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/ifelse.md +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/ifelse.mdx +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/ifelse.md +2025-03-21 11:46:22,267 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/question-classifier.md +2025-03-21 11:46:22,268 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/question-classifier.mdx +2025-03-21 11:46:22,268 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/question-classifier.md +2025-03-21 11:46:22,268 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/tools.md +2025-03-21 11:46:22,268 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/tools.mdx +2025-03-21 11:46:22,268 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/tools.md +2025-03-21 11:57:14,153 - md-to-mdx - INFO - 处理文件: en/guides/workflow/shortcut-key.md +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 转换完成: en/guides/workflow/shortcut-key.mdx +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/shortcut-key.md +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 处理文件: en/guides/workflow/key-concepts.md +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 转换完成: en/guides/workflow/key-concepts.mdx +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/key-concepts.md +2025-03-21 11:57:14,156 - md-to-mdx - INFO - 处理文件: en/guides/workflow/bulletin.md +2025-03-21 11:57:14,158 - md-to-mdx - INFO - 转换完成: en/guides/workflow/bulletin.mdx +2025-03-21 11:57:14,158 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/bulletin.md +2025-03-21 11:57:14,158 - md-to-mdx - INFO - 处理文件: en/guides/workflow/publish.md +2025-03-21 11:57:14,158 - md-to-mdx - INFO - 转换完成: en/guides/workflow/publish.mdx 
+2025-03-21 11:57:14,158 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/publish.md +2025-03-21 11:57:14,158 - md-to-mdx - INFO - 处理文件: en/guides/workflow/file-upload.md +2025-03-21 11:57:14,159 - md-to-mdx - INFO - 转换完成: en/guides/workflow/file-upload.mdx +2025-03-21 11:57:14,159 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/file-upload.md +2025-03-21 11:57:14,159 - md-to-mdx - INFO - 处理文件: en/guides/workflow/README.md +2025-03-21 11:57:14,159 - md-to-mdx - INFO - 转换完成: en/guides/workflow/README.mdx +2025-03-21 11:57:14,160 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/README.md +2025-03-21 11:57:14,160 - md-to-mdx - INFO - 处理文件: en/guides/workflow/orchestrate-node.md +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 转换完成: en/guides/workflow/orchestrate-node.mdx +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/orchestrate-node.md +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 处理文件: en/guides/workflow/additional-features.md +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 转换完成: en/guides/workflow/additional-features.mdx +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/additional-features.md +2025-03-21 11:57:14,161 - md-to-mdx - INFO - 处理文件: en/guides/workflow/export_import.md +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 转换完成: en/guides/workflow/export_import.mdx +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/export_import.md +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 处理文件: en/guides/workflow/variables.md +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 转换完成: en/guides/workflow/variables.mdx +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/variables.md +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/history.md +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/history.mdx +2025-03-21 11:57:14,162 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/history.md +2025-03-21 
11:57:14,162 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/README.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/README.mdx +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/README.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/log.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/log.mdx +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/log.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/checklist.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/checklist.mdx +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/checklist.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 11:57:14,163 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx +2025-03-21 11:57:14,164 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 11:57:14,164 - md-to-mdx - INFO - 处理文件: en/guides/workflow/debug-and-preview/step-run.md +2025-03-21 11:57:14,164 - md-to-mdx - INFO - 转换完成: en/guides/workflow/debug-and-preview/step-run.mdx +2025-03-21 11:57:14,164 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/debug-and-preview/step-run.md +2025-03-21 11:57:14,164 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/error-type.md +2025-03-21 11:57:14,165 - md-to-mdx - INFO - 转换完成: en/guides/workflow/error-handling/error-type.mdx +2025-03-21 11:57:14,165 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/error-type.md +2025-03-21 11:57:14,165 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/predefined-error-handling-logic.md +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 
转换完成: en/guides/workflow/error-handling/predefined-error-handling-logic.mdx +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/predefined-error-handling-logic.md +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 处理文件: en/guides/workflow/error-handling/README.md +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 转换完成: en/guides/workflow/error-handling/README.mdx +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/error-handling/README.md +2025-03-21 11:57:14,166 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/agent.md +2025-03-21 11:57:14,167 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/agent.mdx +2025-03-21 11:57:14,167 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/agent.md +2025-03-21 11:57:14,167 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/variable-assigner.md +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/variable-assigner.mdx +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/variable-assigner.md +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/template.md +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/template.mdx +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/template.md +2025-03-21 11:57:14,168 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/loop.md +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/loop.mdx +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/loop.md +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/parameter-extractor.md +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/parameter-extractor.mdx +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/parameter-extractor.md +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/variable-aggregator.md +2025-03-21 11:57:14,169 - md-to-mdx 
- INFO - 转换完成: en/guides/workflow/node/variable-aggregator.mdx +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/variable-aggregator.md +2025-03-21 11:57:14,169 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/knowledge-retrieval.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/knowledge-retrieval.mdx +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/knowledge-retrieval.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/doc-extractor.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/doc-extractor.mdx +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/doc-extractor.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/list-operator.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/list-operator.mdx +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/list-operator.md +2025-03-21 11:57:14,170 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/http-request.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/http-request.mdx +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/http-request.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/README.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/README.mdx +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/README.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/answer.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/answer.mdx +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/answer.md +2025-03-21 11:57:14,171 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/iteration.md +2025-03-21 11:57:14,172 - md-to-mdx - INFO - 转换完成: 
en/guides/workflow/node/iteration.mdx +2025-03-21 11:57:14,172 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/iteration.md +2025-03-21 11:57:14,172 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/llm.md +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/llm.mdx +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/llm.md +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/code.md +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/code.mdx +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/code.md +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/end.md +2025-03-21 11:57:14,177 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/end.mdx +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/end.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/start.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/start.mdx +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/start.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/ifelse.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/ifelse.mdx +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/ifelse.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/question-classifier.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/question-classifier.mdx +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/question-classifier.md +2025-03-21 11:57:14,178 - md-to-mdx - INFO - 处理文件: en/guides/workflow/node/tools.md +2025-03-21 11:57:14,179 - md-to-mdx - INFO - 转换完成: en/guides/workflow/node/tools.mdx +2025-03-21 11:57:14,179 - md-to-mdx - INFO - 已删除源文件: en/guides/workflow/node/tools.md +2025-03-21 14:05:47,321 - 
md-to-mdx - ERROR - 无效的输入路径: /Users/allen/Documents/dify-docs-mintlify/en/guides/workflow/additional-features.mdx +2025-03-21 14:07:17,467 - md-to-mdx - ERROR - 无效的输入路径: /Users/allen/Documents/dify-docs-mintlify/en/guides/workflow/additional-features.mdx +2025-03-21 14:09:13,272 - md-to-mdx - ERROR - 无效的输入路径: /Users/allen/Documents/dify-docs-mintlify/en/guides/workflow/additional-features.mdx +2025-03-21 14:11:26,513 - md-to-mdx - ERROR - 无效的输入路径: /Users/allen/Documents/dify-docs-mintlify/en/guides/workflow/additional-features.mdx +2025-03-21 14:37:57,826 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/embedding-in-websites.md +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/embedding-in-websites.mdx +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/embedding-in-websites.md +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/developing-with-apis.md +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/developing-with-apis.mdx +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/developing-with-apis.md +2025-03-21 14:37:57,828 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/README.md +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/README.mdx +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/README.md +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/based-on-frontend-templates.md +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/based-on-frontend-templates.mdx +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/based-on-frontend-templates.md +2025-03-21 14:37:57,829 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md +2025-03-21 
14:37:57,830 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx +2025-03-21 14:37:57,830 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md +2025-03-21 14:37:57,830 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md +2025-03-21 14:37:57,830 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx +2025-03-21 14:37:57,830 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md +2025-03-21 14:37:57,830 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/launch-your-webapp-quickly/README.md +2025-03-21 14:37:57,831 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/launch-your-webapp-quickly/README.mdx +2025-03-21 14:37:57,831 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/launch-your-webapp-quickly/README.md +2025-03-21 14:37:57,831 - md-to-mdx - INFO - 处理文件: en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md +2025-03-21 14:37:57,832 - md-to-mdx - INFO - 转换完成: en/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx +2025-03-21 14:37:57,832 - md-to-mdx - INFO - 已删除源文件: en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md +2025-03-21 14:41:00,708 - md-to-mdx - INFO - 处理文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/logs.md +2025-03-21 14:41:00,713 - md-to-mdx - INFO - 转换完成: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/logs.mdx +2025-03-21 14:41:00,713 - md-to-mdx - INFO - 已删除源文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/logs.md +2025-03-21 14:41:00,714 - md-to-mdx - INFO - 处理文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/annotation-reply.md +2025-03-21 14:41:00,719 - md-to-mdx - INFO - 转换完成: 
/Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/annotation-reply.mdx +2025-03-21 14:41:00,720 - md-to-mdx - INFO - 已删除源文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/annotation-reply.md +2025-03-21 14:41:00,720 - md-to-mdx - INFO - 处理文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/README.md +2025-03-21 14:41:00,722 - md-to-mdx - INFO - 转换完成: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/README.mdx +2025-03-21 14:41:00,722 - md-to-mdx - INFO - 已删除源文件: /Users/allen/Documents/dify-docs-mintlify/en/guides/annotation/README.md +2025-03-21 14:43:58,317 - md-to-mdx - INFO - 处理文件: en/guides/monitoring/analysis.md +2025-03-21 14:43:58,324 - md-to-mdx - INFO - 转换完成: en/guides/monitoring/analysis.mdx +2025-03-21 14:43:58,325 - md-to-mdx - INFO - 已删除源文件: en/guides/monitoring/analysis.md +2025-03-21 14:43:58,325 - md-to-mdx - INFO - 处理文件: en/guides/monitoring/README.md +2025-03-21 14:43:58,327 - md-to-mdx - INFO - 转换完成: en/guides/monitoring/README.mdx +2025-03-21 14:43:58,327 - md-to-mdx - INFO - 已删除源文件: en/guides/monitoring/README.md +2025-03-21 14:43:58,329 - md-to-mdx - INFO - 处理文件: en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md +2025-03-21 14:43:58,332 - md-to-mdx - INFO - 转换完成: en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx +2025-03-21 14:43:58,332 - md-to-mdx - INFO - 已删除源文件: en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md +2025-03-21 14:43:58,333 - md-to-mdx - INFO - 处理文件: en/guides/monitoring/integrate-external-ops-tools/integrate-opik.md +2025-03-21 14:43:58,338 - md-to-mdx - INFO - 转换完成: en/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx +2025-03-21 14:43:58,339 - md-to-mdx - INFO - 已删除源文件: en/guides/monitoring/integrate-external-ops-tools/integrate-opik.md +2025-03-21 14:43:58,339 - md-to-mdx - INFO - 处理文件: en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md +2025-03-21 14:43:58,351 - 
md-to-mdx - INFO - Conversion completed: en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx
+2025-03-21 14:43:58,351 - md-to-mdx - INFO - Deleted source file: en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md
+2025-03-21 14:43:58,351 - md-to-mdx - INFO - Processing file: en/guides/monitoring/integrate-external-ops-tools/README.md
+2025-03-21 14:43:58,352 - md-to-mdx - INFO - Conversion completed: en/guides/monitoring/integrate-external-ops-tools/README.mdx
+2025-03-21 14:43:58,352 - md-to-mdx - INFO - Deleted source file: en/guides/monitoring/integrate-external-ops-tools/README.md
+2025-03-21 14:49:17,406 - md-to-mdx - INFO - Processing file: en/guides/extension/README.md
+2025-03-21 14:49:17,411 - md-to-mdx - INFO - Conversion completed: en/guides/extension/README.mdx
+2025-03-21 14:49:17,411 - md-to-mdx - INFO - Deleted source file: en/guides/extension/README.md
+2025-03-21 14:49:17,412 - md-to-mdx - INFO - Processing file: en/guides/extension/api-based-extension/external-data-tool.md
+2025-03-21 14:49:17,413 - md-to-mdx - INFO - Conversion completed: en/guides/extension/api-based-extension/external-data-tool.mdx
+2025-03-21 14:49:17,413 - md-to-mdx - INFO - Deleted source file: en/guides/extension/api-based-extension/external-data-tool.md
+2025-03-21 14:49:17,413 - md-to-mdx - INFO - Processing file: en/guides/extension/api-based-extension/README.md
+2025-03-21 14:49:17,416 - md-to-mdx - INFO - Conversion completed: en/guides/extension/api-based-extension/README.mdx
+2025-03-21 14:49:17,416 - md-to-mdx - INFO - Deleted source file: en/guides/extension/api-based-extension/README.md
+2025-03-21 14:49:17,416 - md-to-mdx - INFO - Processing file: en/guides/extension/api-based-extension/cloudflare-workers.md
+2025-03-21 14:49:17,417 - md-to-mdx - INFO - Conversion completed: en/guides/extension/api-based-extension/cloudflare-workers.mdx
+2025-03-21 14:49:17,418 - md-to-mdx - INFO - Deleted source file: en/guides/extension/api-based-extension/cloudflare-workers.md
+2025-03-21 14:49:17,418 - md-to-mdx - INFO - Processing file: en/guides/extension/api-based-extension/moderation-extension.md
+2025-03-21 14:49:17,418 - md-to-mdx - INFO - Conversion completed: en/guides/extension/api-based-extension/moderation-extension.mdx
+2025-03-21 14:49:17,418 - md-to-mdx - INFO - Deleted source file: en/guides/extension/api-based-extension/moderation-extension.md
+2025-03-21 14:49:17,419 - md-to-mdx - INFO - Processing file: en/guides/extension/api-based-extension/moderation.md
+2025-03-21 14:49:17,419 - md-to-mdx - INFO - Conversion completed: en/guides/extension/api-based-extension/moderation.mdx
+2025-03-21 14:49:17,419 - md-to-mdx - INFO - Deleted source file: en/guides/extension/api-based-extension/moderation.md
+2025-03-21 14:49:17,420 - md-to-mdx - INFO - Processing file: en/guides/extension/code-based-extension/external-data-tool.md
+2025-03-21 14:49:17,420 - md-to-mdx - INFO - Conversion completed: en/guides/extension/code-based-extension/external-data-tool.mdx
+2025-03-21 14:49:17,420 - md-to-mdx - INFO - Deleted source file: en/guides/extension/code-based-extension/external-data-tool.md
+2025-03-21 14:49:17,421 - md-to-mdx - INFO - Processing file: en/guides/extension/code-based-extension/README.md
+2025-03-21 14:49:17,423 - md-to-mdx - INFO - Conversion completed: en/guides/extension/code-based-extension/README.mdx
+2025-03-21 14:49:17,423 - md-to-mdx - INFO - Deleted source file: en/guides/extension/code-based-extension/README.md
+2025-03-21 14:49:17,424 - md-to-mdx - INFO - Processing file: en/guides/extension/code-based-extension/moderation.md
+2025-03-21 14:49:17,426 - md-to-mdx - INFO - Conversion completed: en/guides/extension/code-based-extension/moderation.mdx
+2025-03-21 14:49:17,426 - md-to-mdx - INFO - Deleted source file: en/guides/extension/code-based-extension/moderation.md
+2025-03-21 14:52:20,352 - md-to-mdx - INFO - Processing file: en/guides/workspace/explore.md
+2025-03-21 14:52:20,356 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/explore.mdx
+2025-03-21 14:52:20,357 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/explore.md
+2025-03-21 14:52:20,357 - md-to-mdx - INFO - Processing file: en/guides/workspace/app.md
+2025-03-21 14:52:20,358 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/app.mdx
+2025-03-21 14:52:20,358 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/app.md
+2025-03-21 14:52:20,358 - md-to-mdx - INFO - Processing file: en/guides/workspace/billing.md
+2025-03-21 14:52:20,359 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/billing.mdx
+2025-03-21 14:52:20,359 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/billing.md
+2025-03-21 14:52:20,359 - md-to-mdx - INFO - Processing file: en/guides/workspace/README.md
+2025-03-21 14:52:20,359 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/README.mdx
+2025-03-21 14:52:20,360 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/README.md
+2025-03-21 14:52:20,360 - md-to-mdx - INFO - Processing file: en/guides/workspace/invite-and-manage-members.md
+2025-03-21 14:52:20,360 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/invite-and-manage-members.mdx
+2025-03-21 14:52:20,360 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/invite-and-manage-members.md
+2025-03-21 14:52:20,361 - md-to-mdx - INFO - Processing file: en/guides/workspace/app/README.md
+2025-03-21 14:52:20,362 - md-to-mdx - INFO - Conversion completed: en/guides/workspace/app/README.mdx
+2025-03-21 14:52:20,362 - md-to-mdx - INFO - Deleted source file: en/guides/workspace/app/README.md
+2025-03-21 14:53:41,266 - md-to-mdx - INFO - Processing file: en/guides/management/personal-account-management.md
+2025-03-21 14:53:41,275 - md-to-mdx - INFO - Conversion completed: en/guides/management/personal-account-management.mdx
+2025-03-21 14:53:41,276 - md-to-mdx - INFO - Deleted source file: en/guides/management/personal-account-management.md
+2025-03-21 14:53:41,277 - md-to-mdx - INFO - Processing file: en/guides/management/subscription-management.md
+2025-03-21 14:53:41,280 - md-to-mdx - INFO - Conversion completed: en/guides/management/subscription-management.mdx
+2025-03-21 14:53:41,281 - md-to-mdx - INFO - Deleted source file: en/guides/management/subscription-management.md
+2025-03-21 14:53:41,282 - md-to-mdx - INFO - Processing file: en/guides/management/version-control.md
+2025-03-21 14:53:41,288 - md-to-mdx - INFO - Conversion completed: en/guides/management/version-control.mdx
+2025-03-21 14:53:41,289 - md-to-mdx - INFO - Deleted source file: en/guides/management/version-control.md
+2025-03-21 14:53:41,289 - md-to-mdx - INFO - Processing file: en/guides/management/team-members-management.md
+2025-03-21 14:53:41,289 - md-to-mdx - INFO - Conversion completed: en/guides/management/team-members-management.mdx
+2025-03-21 14:53:41,290 - md-to-mdx - INFO - Deleted source file: en/guides/management/team-members-management.md
+2025-03-21 14:53:41,290 - md-to-mdx - INFO - Processing file: en/guides/management/README.md
+2025-03-21 14:53:41,290 - md-to-mdx - INFO - Conversion completed: en/guides/management/README.mdx
+2025-03-21 14:53:41,290 - md-to-mdx - INFO - Deleted source file: en/guides/management/README.md
+2025-03-21 14:53:41,290 - md-to-mdx - INFO - Processing file: en/guides/management/app-management.md
+2025-03-21 14:53:41,291 - md-to-mdx - INFO - Conversion completed: en/guides/management/app-management.mdx
+2025-03-21 14:53:41,291 - md-to-mdx - INFO - Deleted source file: en/guides/management/app-management.md
+2025-03-21 14:59:39,928 - md-to-mdx - INFO - Processing file: en/workshop/basic/build-ai-image-generation-app.md
+2025-03-21 14:59:39,935 - md-to-mdx - INFO - Conversion completed: en/workshop/basic/build-ai-image-generation-app.mdx
+2025-03-21 14:59:39,936 - md-to-mdx - INFO - Deleted source file: en/workshop/basic/build-ai-image-generation-app.md
+2025-03-21 14:59:39,936 - md-to-mdx - INFO - Processing file: en/workshop/basic/README.md
+2025-03-21 14:59:39,937 - md-to-mdx - INFO - Conversion completed: en/workshop/basic/README.mdx
+2025-03-21 14:59:39,937 - md-to-mdx - INFO - Deleted source file: en/workshop/basic/README.md
+2025-03-21 14:59:39,937 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/twitter-chatflow.md
+2025-03-21 14:59:39,942 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/twitter-chatflow.mdx
+2025-03-21 14:59:39,943 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/twitter-chatflow.md
+2025-03-21 14:59:39,943 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/customer-service-bot.md
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/customer-service-bot.mdx
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/customer-service-bot.md
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/README.md
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/README.mdx
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/README.md
+2025-03-21 14:59:39,946 - md-to-mdx - INFO - Processing file: en/workshop/intermediate/article-reader.md
+2025-03-21 14:59:39,947 - md-to-mdx - INFO - Conversion completed: en/workshop/intermediate/article-reader.mdx
+2025-03-21 14:59:39,947 - md-to-mdx - INFO - Deleted source file: en/workshop/intermediate/article-reader.md
+2025-03-21 16:18:24,824 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/cloud.md
+2025-03-21 16:18:24,829 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/cloud.mdx
+2025-03-21 16:18:24,831 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/cloud.md
+2025-03-21 16:18:24,831 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/dify-premium.md
+2025-03-21 16:18:24,833 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/dify-premium.mdx
+2025-03-21 16:18:24,834 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/dify-premium.md
+2025-03-21 16:18:24,837 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/environments.md
+2025-03-21 16:18:24,839 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/environments.mdx
+2025-03-21 16:18:24,840 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/environments.md
+2025-03-21 16:18:24,840 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/bt-panel.md
+2025-03-21 16:18:24,842 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/bt-panel.mdx
+2025-03-21 16:18:24,842 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/bt-panel.md
+2025-03-21 16:18:24,842 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/local-source-code.md
+2025-03-21 16:18:24,843 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/local-source-code.mdx
+2025-03-21 16:18:24,843 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/local-source-code.md
+2025-03-21 16:18:24,843 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/faq.md
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/faq.mdx
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/faq.md
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/zeabur.md
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/zeabur.mdx
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/zeabur.md
+2025-03-21 16:18:24,844 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/README.md
+2025-03-21 16:18:24,845 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/README.mdx
+2025-03-21 16:18:24,845 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/README.md
+2025-03-21 16:18:24,845 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/docker-compose.md
+2025-03-21 16:18:24,846 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/docker-compose.mdx
+2025-03-21 16:18:24,846 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/docker-compose.md
+2025-03-21 16:18:24,846 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.md
+2025-03-21 16:18:24,846 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx
+2025-03-21 16:18:24,846 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.md
+2025-03-21 16:18:24,847 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/readme/features-and-specifications.md
+2025-03-21 16:18:24,847 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/readme/features-and-specifications.mdx
+2025-03-21 16:18:24,847 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/readme/features-and-specifications.md
+2025-03-21 16:18:24,847 - md-to-mdx - INFO - Processing file: ja-jp/getting-started/readme/model-providers.md
+2025-03-21 16:18:24,848 - md-to-mdx - INFO - Conversion completed: ja-jp/getting-started/readme/model-providers.mdx
+2025-03-21 16:18:24,848 - md-to-mdx - INFO - Deleted source file: ja-jp/getting-started/readme/model-providers.md
+2025-03-21 16:45:44,873 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/predefined-model.md
+2025-03-21 16:45:44,878 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/predefined-model.mdx
+2025-03-21 16:45:44,878 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/predefined-model.md
+2025-03-21 16:45:44,878 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/schema.md
+2025-03-21 16:45:44,879 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/schema.mdx
+2025-03-21 16:45:44,879 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/schema.md
+2025-03-21 16:45:44,879 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/customizable-model.md
+2025-03-21 16:45:44,880 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/customizable-model.mdx
+2025-03-21 16:45:44,880 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/customizable-model.md
+2025-03-21 16:45:44,880 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/README.md
+2025-03-21 16:45:44,881 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/README.mdx
+2025-03-21 16:45:44,881 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/README.md
+2025-03-21 16:45:44,881 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/interfaces.md
+2025-03-21 16:45:44,882 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/interfaces.mdx
+2025-03-21 16:45:44,883 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/interfaces.md
+2025-03-21 16:45:44,883 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/new-provider.md
+2025-03-21 16:45:44,884 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/new-provider.mdx
+2025-03-21 16:45:44,884 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/new-provider.md
+2025-03-21 16:45:44,884 - md-to-mdx - INFO - Processing file: ja-jp/guides/model-configuration/load-balancing.md
+2025-03-21 16:45:44,885 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/model-configuration/load-balancing.mdx
+2025-03-21 16:45:44,885 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/model-configuration/load-balancing.md
+2025-03-21 18:28:19,992 - md-to-mdx - INFO - Processing file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md
+2025-03-21 18:28:20,003 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx
+2025-03-21 18:28:20,005 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md
+2025-03-21 18:28:20,005 - md-to-mdx - INFO - Processing file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/README.md
+2025-03-21 18:28:20,021 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/README.mdx
+2025-03-21 18:28:20,021 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/README.md
+2025-03-21 18:28:20,021 - md-to-mdx - INFO - Processing file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md
+2025-03-21 18:28:20,023 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx
+2025-03-21 18:28:20,023 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md
+2025-03-21 19:56:19,838 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/embedding-in-websites.md
+2025-03-21 19:56:19,843 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/embedding-in-websites.mdx
+2025-03-21 19:56:19,843 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/embedding-in-websites.md
+2025-03-21 19:56:19,843 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/developing-with-apis.md
+2025-03-21 19:56:19,845 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/developing-with-apis.mdx
+2025-03-21 19:56:19,845 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/developing-with-apis.md
+2025-03-21 19:56:19,845 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/README.md
+2025-03-21 19:56:19,846 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/README.mdx
+2025-03-21 19:56:19,846 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/README.md
+2025-03-21 19:56:19,846 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/based-on-frontend-templates.md
+2025-03-21 19:56:19,847 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/based-on-frontend-templates.mdx
+2025-03-21 19:56:19,847 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/based-on-frontend-templates.md
+2025-03-21 19:56:19,848 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md
+2025-03-21 19:56:19,850 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx
+2025-03-21 19:56:19,850 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md
+2025-03-21 19:56:19,850 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md
+2025-03-21 19:56:19,851 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx
+2025-03-21 19:56:19,851 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md
+2025-03-21 19:56:19,852 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.md
+2025-03-21 19:56:19,852 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.mdx
+2025-03-21 19:56:19,852 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.md
+2025-03-21 19:56:19,853 - md-to-mdx - INFO - Processing file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.md
+2025-03-21 19:56:19,854 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx
+2025-03-21 19:56:19,854 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.md
+2025-03-21 20:00:19,213 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/analysis.md
+2025-03-21 20:00:19,218 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/analysis.mdx
+2025-03-21 20:00:19,218 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/analysis.md
+2025-03-21 20:00:19,218 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/README.md
+2025-03-21 20:00:19,220 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/README.mdx
+2025-03-21 20:00:19,221 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/README.md
+2025-03-21 20:00:19,221 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/integrate-external-ops-tools/integrate-langfuse.md
+2025-03-21 20:00:19,224 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx
+2025-03-21 20:00:19,224 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/integrate-external-ops-tools/integrate-langfuse.md
+2025-03-21 20:00:19,224 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/integrate-external-ops-tools/integrate-opik.md
+2025-03-21 20:00:19,229 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/integrate-external-ops-tools/integrate-opik.mdx
+2025-03-21 20:00:19,230 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/integrate-external-ops-tools/integrate-opik.md
+2025-03-21 20:00:19,230 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/integrate-external-ops-tools/integrate-langsmith.md
+2025-03-21 20:00:19,237 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx
+2025-03-21 20:00:19,237 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/integrate-external-ops-tools/integrate-langsmith.md
+2025-03-21 20:00:19,237 - md-to-mdx - INFO - Processing file: ja-jp/monitoring/integrate-external-ops-tools/README.md
+2025-03-21 20:00:19,238 - md-to-mdx - INFO - Conversion completed: ja-jp/monitoring/integrate-external-ops-tools/README.mdx
+2025-03-21 20:00:19,238 - md-to-mdx - INFO - Deleted source file: ja-jp/monitoring/integrate-external-ops-tools/README.md
+2025-03-21 20:01:58,005 - md-to-mdx - INFO - Processing file: ja-jp/guides/annotation/logs.md
+2025-03-21 20:01:58,009 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/annotation/logs.mdx
+2025-03-21 20:01:58,009 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/annotation/logs.md
+2025-03-21 20:01:58,010 - md-to-mdx - INFO - Processing file: ja-jp/guides/annotation/annotation-reply.md
+2025-03-21 20:01:58,012 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/annotation/annotation-reply.mdx
+2025-03-21 20:01:58,012 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/annotation/annotation-reply.md
+2025-03-21 20:01:58,012 - md-to-mdx - INFO - Processing file: ja-jp/guides/annotation/README.md
+2025-03-21 20:01:58,013 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/annotation/README.mdx
+2025-03-21 20:01:58,013 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/annotation/README.md
+2025-03-21 20:08:48,637 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/personal-account-management.md
+2025-03-21 20:08:48,640 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/personal-account-management.mdx
+2025-03-21 20:08:48,640 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/personal-account-management.md
+2025-03-21 20:08:48,641 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/subscription-management.md
+2025-03-21 20:08:48,642 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/subscription-management.mdx
+2025-03-21 20:08:48,642 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/subscription-management.md
+2025-03-21 20:08:48,642 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/version-control.md
+2025-03-21 20:08:48,651 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/version-control.mdx
+2025-03-21 20:08:48,651 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/version-control.md
+2025-03-21 20:08:48,651 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/team-members-management.md
+2025-03-21 20:08:48,652 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/team-members-management.mdx
+2025-03-21 20:08:48,653 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/team-members-management.md
+2025-03-21 20:08:48,653 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/README.md
+2025-03-21 20:08:48,653 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/README.mdx
+2025-03-21 20:08:48,653 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/README.md
+2025-03-21 20:08:48,653 - md-to-mdx - INFO - Processing file: ja-jp/guides/management/app-management.md
+2025-03-21 20:08:48,654 - md-to-mdx - INFO - Conversion completed: ja-jp/guides/management/app-management.mdx
+2025-03-21 20:08:48,654 - md-to-mdx - INFO - Deleted source file: ja-jp/guides/management/app-management.md
diff --git a/docs.json b/docs.json
index 4eda4630..da6b1fc2 100644
--- a/docs.json
+++ b/docs.json
@@ -38,213 +38,396 @@
            "en/getting-started/install-self-hosted/readme",
            "en/getting-started/install-self-hosted/docker-compose",
            "en/getting-started/install-self-hosted/local-source-code",
-            "en/getting-started/install-self-hosted/bt-panel",
+            "en/getting-started/install-self-hosted/aa-panel",
            "en/getting-started/install-self-hosted/start-the-frontend-docker-container",
            "en/getting-started/install-self-hosted/environments",
            "en/getting-started/install-self-hosted/faqs"
          ]
        },
-        "en/getting-started/cloud",
-        "en/getting-started/dify-premium-on-aws"
+        "en/getting-started/cloud",
+        "en/getting-started/dify-premium"
      ]
    },
    {
-      "group": "User Guide",
-      "pages": [
-        "en-us/user-guide/welcome",
-        {
-          "group": "Model",
-          "pages": [
-            "en-us/user-guide/models/model-configuration",
-            "en-us/user-guide/models/new-provider",
-            "en-us/user-guide/models/predefined-model",
-            "en-us/user-guide/models/customizable-model",
-            "en-us/user-guide/models/interfaces",
-            "en-us/user-guide/models/schema",
-            "en-us/user-guide/models/load-balancing"
-          ]
-        },
+      "group": "Guide",
+      "pages": [
+        {
+          "group": "Model Configuration",
+          "pages": [
+            "en/guides/model-configuration/readme",
+            "en/guides/model-configuration/new-provider",
+            "en/guides/model-configuration/predefined-model",
+            "en/guides/model-configuration/customizable-model",
+            "en/guides/model-configuration/interfaces",
+            "en/guides/model-configuration/schema",
+            "en/guides/model-configuration/load-balancing"
+          ]
+        },
        {
          "group": "Application Orchestration",
          "pages": [
-            "en-us/user-guide/build-app/chatbot",
-            "en-us/user-guide/build-app/text-generator",
-            "en-us/user-guide/build-app/agent",
+            "en/guides/application-orchestrate/readme",
+            "en/guides/application-orchestrate/chatbot",
+            "en/guides/application-orchestrate/text-generator",
+            "en/guides/application-orchestrate/agent",
            {
-              "group": "Chatflow & Workflow",
+              "group": "Application Toolkits",
              "pages": [
-                "en-us/user-guide/build-app/flow-app/concepts",
-                "en-us/user-guide/build-app/flow-app/create-flow-app",
-                "en-us/user-guide/build-app/flow-app/variables",
+                "en/guides/application-orchestrate/app-toolkits/readme",
+                "en/guides/application-orchestrate/app-toolkits/moderation-tool"
+              ]
+            }
+          ]
+        },
+        {
+          "group": "Workflow",
+          "pages": [
+            "en/guides/workflow/README",
+            "en/guides/workflow/key-concepts",
+            "en/guides/workflow/variables",
            {
-              "group": "Nodes",
+              "group": "Node Description",
              "pages": [
-                "en-us/user-guide/build-app/flow-app/nodes/start",
-                "en-us/user-guide/build-app/flow-app/nodes/end",
-                "en-us/user-guide/build-app/flow-app/nodes/answer",
-                "en-us/user-guide/build-app/flow-app/nodes/llm",
-                "en-us/user-guide/build-app/flow-app/nodes/knowledge-retrieval",
-                "en-us/user-guide/build-app/flow-app/nodes/question-classifier",
-                "en-us/user-guide/build-app/flow-app/nodes/ifelse",
-                "en-us/user-guide/build-app/flow-app/nodes/code",
-                "en-us/user-guide/build-app/flow-app/nodes/template",
-                "en-us/user-guide/build-app/flow-app/nodes/doc-extractor",
-                "en-us/user-guide/build-app/flow-app/nodes/list-operator",
-                "en-us/user-guide/build-app/flow-app/nodes/variable-aggregator",
-                "en-us/user-guide/build-app/flow-app/nodes/variable-assigner",
-                "en-us/user-guide/build-app/flow-app/nodes/iteration",
-                "en-us/user-guide/build-app/flow-app/nodes/parameter-extractor",
-                "en-us/user-guide/build-app/flow-app/nodes/http-request",
-                "en-us/user-guide/build-app/flow-app/nodes/tools"
+                "en/guides/workflow/nodes/start",
+                "en/guides/workflow/nodes/llm",
+                "en/guides/workflow/nodes/knowledge-retrieval",
+                "en/guides/workflow/nodes/question-classifier",
+                "en/guides/workflow/nodes/ifelse",
+                "en/guides/workflow/nodes/code",
+                "en/guides/workflow/nodes/template",
+                "en/guides/workflow/nodes/doc-extractor",
+                "en/guides/workflow/nodes/list-operator",
+                "en/guides/workflow/nodes/variable-aggregator",
+                "en/guides/workflow/nodes/variable-assigner",
+                "en/guides/workflow/nodes/iteration",
+                "en/guides/workflow/nodes/parameter-extractor",
+                "en/guides/workflow/nodes/http-request",
+                "en/guides/workflow/nodes/agent",
+                "en/guides/workflow/nodes/tools",
+                "en/guides/workflow/nodes/end",
+                "en/guides/workflow/nodes/answer",
+                "en/guides/workflow/nodes/loop"
              ]
            },
-            "en-us/user-guide/build-app/flow-app/shotcut-key",
-            "en-us/user-guide/build-app/flow-app/orchestrate-node",
-            "en-us/user-guide/build-app/flow-app/file-upload",
-            "en-us/user-guide/build-app/flow-app/additional-features",
-            "en-us/user-guide/build-app/flow-app/application-publishing"
+            "en/guides/workflow/shortcut-key",
+            "en/guides/workflow/orchestrate-node",
+            "en/guides/workflow/file-upload",
+            {
+              "group": "Error Handling",
+              "pages": [
+                "en/guides/workflow/error-handling/readme",
+                "en/guides/workflow/error-handling/predefined-error-handling-logic",
+                "en/guides/workflow/error-handling/error-type"
+              ]
+            },
+            "en/guides/workflow/additional-features",
+            {
+              "group": "Debug and Preview",
+              "pages": [
+                "en/guides/workflow/debug-and-preview/preview-and-run",
+                "en/guides/workflow/debug-and-preview/step-run",
+                "en/guides/workflow/debug-and-preview/log",
+                "en/guides/workflow/debug-and-preview/checklist",
+                "en/guides/workflow/debug-and-preview/history"
+              ]
+            },
+            "en/guides/workflow/publish",
+            "en/guides/workflow/bulletin"
          ]
-        }
-      ]
-    },
-    {
-      "group": "Debug and Preview",
-      "pages": [
-        {
-          "group": "Chatflow & Workflow",
-          "pages": [
-            "en-us/user-guide/debug-app/chatflow-and-workflow/preview-and-run",
-            "en-us/user-guide/debug-app/chatflow-and-workflow/step-run",
-            "en-us/user-guide/debug-app/chatflow-and-workflow/log",
-            "en-us/user-guide/debug-app/chatflow-and-workflow/checklist",
-            "en-us/user-guide/debug-app/chatflow-and-workflow/history"
-          ]
-        }
-      ]
-    },
-    {
-      "group": "Application Publishing",
-      "pages": [
-        {
-          "group": "Publish as a Single-page Web App",
-          "pages": [
-            "en-us/user-guide/application-publishing/launch-your-webapp-quickly/web-app-settings",
-            "en-us/user-guide/application-publishing/launch-your-webapp-quickly/text-generator",
-            "en-us/user-guide/application-publishing/launch-your-webapp-quickly/conversation-application"
-          ]
-        },
-        "en-us/user-guide/application-publishing/embedding-in-websites",
-        "en-us/user-guide/application-publishing/developing-with-apis",
-        "en-us/user-guide/application-publishing/based-on-frontend-templates"
-      ]
-    },
-    {
-      "group": "Management",
-      "pages": [
-        "en-us/management/app-management",
-        "en-us/management/team-members-management",
-        "en-us/management/personal-account-management",
-        "en-us/management/subscription-management",
-        "en-us/management/version-control"
-      ]
-    },
-    {
-      "group": "Monitoring",
-      "pages": [
-        "en-us/user-guide/monitoring/analysis",
-        "en-us/user-guide/monitoring/logs",
-        "en-us/user-guide/monitoring/annotation-reply",
-        {
-          "group": "Integrate External Ops Tools",
-          "pages": [
-            "en-us/user-guide/monitoring/integrate-external-ops-tools/integrate-langfuse",
-            "en-us/user-guide/monitoring/integrate-external-ops-tools/integrate-langsmith"
-          ]
-        }
-      ]
        },
        {
          "group": "Knowledge",
          "pages": [
-            "en-us/user-guide/knowledge-base/readme",
+            "en/guides/knowledge-base/readme",
            {
              "group": "Create Knowledge",
              "pages": [
-                "en-us/user-guide/knowledge-base/knowledge-base-creation/introduction",
+                "en/guides/knowledge-base/knowledge-base-creation/introduction",
                {
                  "group": "1. Import Text Data",
                  "pages": [
-                    "en-us/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme",
-                    "en-us/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion",
-                    "en-us/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website"
+                    "en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme",
+                    "en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion",
+                    "en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website"
                  ]
                },
-                "en-us/user-guide/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text",
-                "en-us/user-guide/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods"
+                "en/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text",
+                "en/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods"
              ]
            },
            {
              "group": "Manage Knowledge",
              "pages": [
-                "en-us/user-guide/knowledge-base/knowledge-and-documents-maintenance/introduction",
-                "en-us/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents",
-                "en-us/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api"
+                "en/guides/knowledge-base/knowledge-and-documents-maintenance/introduction",
+                "en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents",
+                "en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api"
              ]
            },
-            "en-us/user-guide/knowledge-base/metadata",
-            "en-us/user-guide/knowledge-base/integrate-knowledge-within-application",
-            "en-us/user-guide/knowledge-base/retrieval-test-and-citation",
-            "en-us/user-guide/knowledge-base/connect-external-knowledge-base",
-            "en-us/user-guide/knowledge-base/external-knowledge-api"
+            "en/guides/knowledge-base/metadata",
+            "en/guides/knowledge-base/integrate-knowledge-within-application",
+            "en/guides/knowledge-base/retrieval-test-and-citation",
+            "en/guides/knowledge-base/connect-external-knowledge-base",
+            "en/guides/knowledge-base/external-knowledge-api"
          ]
        },
        {
-          "group": "Tools",
+          "group": "Publishing",
          "pages": [
-            "en-us/user-guide/tools/introduction",
            {
-              "group": "Tool Configuration",
+              "group": "Publish as a Web App",
              "pages": [
+                "en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings",
+                "en/guides/application-publishing/launch-your-webapp-quickly/text-generator",
+                "en/guides/application-publishing/launch-your-webapp-quickly/conversation-application"
+              ]
+            },
+            "en/guides/application-publishing/embedding-in-websites",
+            "en/guides/application-publishing/developing-with-apis",
+            "en/guides/application-publishing/based-on-frontend-templates"
+          ]
+        },
+        {
+          "group": "Annotation",
+          "pages": [
+            "en/guides/annotation/logs",
+            "en/guides/annotation/annotation-reply"
+          ]
+        },
+        {
+          "group": "Monitoring",
+          "pages": [
+            "en/guides/monitoring/readme",
+            "en/guides/monitoring/analysis",
+            {
+              "group": "Integrate External Ops Tools",
+              "pages": [
+                "en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse",
+                "en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith"
+              ]
+            }
+          ]
+        },
+        {
+          "group": "Collaboration",
+          "pages": [
+            "en/guides/workspace/app",
+            "en/guides/workspace/invite-and-manage-members"
+          ]
+        },
+        {
+          "group": "Management",
+          "pages": [
+            "en/guides/management/app-management",
+            "en/guides/management/team-members-management",
+            "en/guides/management/personal-account-management",
+            "en/guides/management/subscription-management",
+            "en/guides/management/version-control"
+          ]
+        }
+      ]
+    },
+    {
+      "group": "Workshop",
+      "pages": [
+        "en/workshop/README",
+        {
+          "group": "Basic",
+          "pages": [
+            "en/workshop/basic/build-ai-image-generation-app"
+          ]
+        },
+        {
+          "group": "Intermediate",
+          "pages": [
+            "en/workshop/intermediate/article-reader",
+            "en/workshop/intermediate/customer-service-bot",
+            "en/workshop/intermediate/twitter-chatflow"
+
+          ]
+        }
+      ]
+    },
+    {
+      "group": "Community",
+      "pages": [
+        "en/community/support",
+        "en/community/contribution",
+        "en/community/docs-contribution"
+      ]
+    },
+    {
+      "group": "Plugins",
+      "pages": [
+        "en/plugins/introduction",
+        {
+          "group": "Quick Start",
+          "pages": [
+            "en/plugins/quick-start/README",
+            "en/plugins/quick-start/install-plugins",
+            {
+              "group": "Develop Plugins",
+              "pages": [
+                "en/plugins/quick-start/develop-plugins/README",
+                "en/plugins/quick-start/develop-plugins/initialize-development-tools",
+                "en/plugins/quick-start/develop-plugins/tool-plugin",
                {
-                  "group": "Dify Official Tools",
+                  "group": "Model Plugin",
                  "pages": [
-                    "en-us/user-guide/tools/dify/google",
-                    "en-us/user-guide/tools/dify/bing",
-                    "en-us/user-guide/tools/dify/perplexity",
-                    "en-us/user-guide/tools/dify/stable-diffusion"
+                    "en/plugins/quick-start/develop-plugins/model-plugin/README",
+                    "en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers",
+                    "en/plugins/quick-start/develop-plugins/model-plugin/predefined-model",
+                    "en/plugins/quick-start/develop-plugins/model-plugin/customizable-model"
                  ]
                },
-                {
-                  "group": "Community Tools",
-                  "pages": [
-                    "en-us/user-guide/tools/community/searchapi",
-                    "en-us/user-guide/tools/community/alphavantage",
-                    "en-us/user-guide/tools/community/comfyui",
-                    "en-us/user-guide/tools/community/searxng",
-                    "en-us/user-guide/tools/community/serper",
-                    "en-us/user-guide/tools/community/siliconflow"
-                  ]
-                }
              ]
            },
-            "en-us/user-guide/tools/quick-tool-integration",
-            "en-us/user-guide/tools/advanced-tool-integration"
+            "en/plugins/quick-start/develop-plugins/agent-strategy-plugin",
+            "en/plugins/quick-start/develop-plugins/extension-plugin",
+            "en/plugins/quick-start/develop-plugins/bundle"
          ]
        },
+            "en/plugins/quick-start/debug-plugin"
+          ]
+        },
+        "en/plugins/manage-plugins",
+        {
+          "group": "Schema Specification",
+          "pages": [
+            "en/plugins/schema-definition/manifest",
+            "en/plugins/schema-definition/endpoint",
+            "en/plugins/schema-definition/tool",
+            "en/plugins/schema-definition/agent",
+            {
+              "group": "Model",
+              "pages": [
+                "en/plugins/schema-definition/model/model-designing-rules",
+                "en/plugins/schema-definition/model/model-schema"
+              ]
+            },
+            "en/plugins/schema-definition/general-specifications",
+            "en/plugins/schema-definition/persistent-storage",
+            {
+              "group": "Reverse Invocation of the Dify Service",
+              "pages": [
+                "en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README",
+                "en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app",
+                "en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model",
+                "en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool",
+                "en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node"
+              ]
+            }
          ]
        },
        {
-          "group": "API",
+          "group": "Best Practice",
          "pages": [
-            "en-us/user-guide/api-documentation/text-generator",
-            "en-us/user-guide/api-documentation/chatbot",
-            "en-us/user-guide/api-documentation/workflow",
-            "en-us/user-guide/api-documentation/maintain-dataset-via-api",
-            "en-us/user-guide/api-documentation/external-knowledge-api-documentation"
+            "en/plugins/best-practice/README",
+            "en/plugins/best-practice/develop-a-slack-bot-plugin"
          ]
        },
+        {
+          "group": "Publish Plugins",
+          "pages": [
+            "en/plugins/publish-plugins/README",
+            {
+              "group": "Publish to Dify Marketplace",
+              "pages": [
+                "en/plugins/publish-plugins/publish-to-dify-marketplace/README",
+                "en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines",
+                "en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines"
+              ]
+            },
+            "en/plugins/publish-plugins/publish-plugin-on-personal-github-repo",
+            "en/plugins/publish-plugins/package-plugin-file-and-publish"
+          ]
+        },
+        "en/plugins/faq"
+      ]
+    },
+    {
+      "group": "Development",
+      "pages": [
+        {
+          "group": "DifySandbox",
+          "pages": [
+            "en/development/backend/sandbox/README",
+            "en/development/backend/sandbox/contribution"
+          ]
+        },
+        {
+          "group": "Models Integration",
+          "pages": [
+            "en/development/models-integration/hugging-face",
+            "en/development/models-integration/replicate",
+            "en/development/models-integration/xinference",
+            "en/development/models-integration/openllm",
+            "en/development/models-integration/localai",
+            "en/development/models-integration/ollama",
+            "en/development/models-integration/litellm",
+            "en/development/models-integration/gpustack",
+            "en/development/models-integration/aws-bedrock-deepseek"
+          ]
+        },
+        {
+          "group": "Migration",
+          "pages": [
+            "en/development/migration/migrate-to-v1"
+          ]
+        }
+      ]
+    },
+    {
+      "group": "Learn More",
+      "pages": [
+        {
+          "group": "Use Cases",
+          "pages": [
+            "en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app",
+            "en/learn-more/use-cases/private-ai-ollama-deepseek-dify",
+            "en/learn-more/use-cases/build-an-notion-ai-assistant",
+            "en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify",
+            "en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes",
+            "en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website",
+            "en/learn-more/use-cases/how-to-connect-aws-bedrock",
+            "en/learn-more/use-cases/dify-schedule",
+            "en/learn-more/use-cases/building-an-ai-thesis-slack-bot"
+          ]
+        },
+        {
+          "group": "Extended Reading",
+          "pages": [
+            "en/learn-more/extended-reading/what-is-llmops",
+            {
+              "group": "Retrieval-Augmented Generation (RAG)",
+              "pages": [
+                "en/learn-more/extended-reading/retrieval-augment/README",
+                "en/learn-more/extended-reading/retrieval-augment/hybrid-search",
+                "en/learn-more/extended-reading/retrieval-augment/rerank",
+                "en/learn-more/extended-reading/retrieval-augment/retrieval"
+              ]
+            },
+            "en/learn-more/extended-reading/how-to-use-json-schema-in-dify"
          ]
        },
        {
          "group": "FAQ",
          "pages": [
-            "en-us/faq/llm-using"
+            "en/learn-more/faq/install-faq",
+            "en/learn-more/faq/use-llms-faq",
+            "en/learn-more/faq/plugins"
+          ]
+        }
+      ]
+    },
+    {
+      "group": "Policies",
+      "pages": [
+        "en/policies/open-source",
+        {
+          "group": "User Agreement",
+          "pages": [
+            "en/policies/agreement/README",
+            "https://dify.ai/terms",
+            "https://dify.ai/privacy",
+            "en/policies/agreement/get-compliance-report"
          ]
        }
      ]
@@ -306,6 +489,7 @@
          "group": "构建应用",
          "pages": [
            "zh-hans/guides/application-orchestrate/readme",
+            "zh-hans/guides/application-orchestrate/creating-an-application",
            {
              "group": "聊天助手",
              "pages": [
@@ -340,7 +524,7 @@
            "zh-hans/guides/workflow/nodes/template",
            "zh-hans/guides/workflow/nodes/doc-extractor",
            "zh-hans/guides/workflow/nodes/list-operator",
-            "zh-hans/guides/workflow/nodes/variable-aggregation",
+            "zh-hans/guides/workflow/nodes/variable-aggregator",
            "zh-hans/guides/workflow/nodes/variable-assigner",
            "zh-hans/guides/workflow/nodes/iteration",
            "zh-hans/guides/workflow/nodes/parameter-extractor",
@@
-348,7 +532,8 @@ "zh-hans/guides/workflow/nodes/agent", "zh-hans/guides/workflow/nodes/tools", "zh-hans/guides/workflow/nodes/end", - "zh-hans/guides/workflow/nodes/answer" + "zh-hans/guides/workflow/nodes/answer", + "zh-hans/guides/workflow/nodes/loop" ] }, "zh-hans/guides/workflow/shortcut-key", @@ -468,14 +653,14 @@ ] } ] - }, + }, { "group": "协同", "pages": [ "zh-hans/guides/workspace/app", "zh-hans/guides/workspace/invite-and-manage-members" ] - }, + }, { "group": "管理", "pages": [ @@ -716,215 +901,228 @@ "tab": "ドキュメント", "groups": [ { - "group": "はじめに", + "group": "入門", "pages": [ - "ja-jp/introduction" + { + "group": "Difyへようこそ", + "pages": [ + "ja-jp/introduction", + "ja-jp/getting-started/readme/features-and-specifications", + "ja-jp/getting-started/readme/model-providers" + ] + }, + "ja-jp/getting-started/cloud", + { + "group": "Dify コミュニティ版", + "pages": [ + "ja-jp/getting-started/install-self-hosted/readme", + "ja-jp/getting-started/install-self-hosted/docker-compose", + "ja-jp/getting-started/install-self-hosted/local-source-code", + "ja-jp/getting-started/install-self-hosted/bt-panel", + "ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container", + "ja-jp/getting-started/install-self-hosted/environments", + "ja-jp/getting-started/install-self-hosted/faq" + ] + }, + "ja-jp/getting-started/dify-premium" ] }, { - "group": "ユーザーマニュアル", + "group": "マニュアル", "pages": [ - "ja-jp/user-guide/welcome", { - "group": "モデルの接続", + "group": "モデル", "pages": [ - "ja-jp/user-guide/models/model-configuration", - "ja-jp/user-guide/models/new-provider", - "ja-jp/user-guide/models/predefined-model", - "ja-jp/user-guide/models/customizable-model", - "ja-jp/user-guide/models/interfaces", - "ja-jp/user-guide/models/schema", - "ja-jp/user-guide/models/load-balancing" + "ja-jp/guides/model-configuration/readme", + "ja-jp/guides/model-configuration/new-provider", + "ja-jp/guides/model-configuration/predefined-model", + 
"ja-jp/guides/model-configuration/customizable-model", + "ja-jp/guides/model-configuration/interfaces", + "ja-jp/guides/model-configuration/schema", + "ja-jp/guides/model-configuration/load-balancing" ] }, { - "group": "アプリの構築", + "group": "アプリ・オーケストレーション", "pages": [ - "ja-jp/user-guide/build-app/chatbot", - "ja-jp/user-guide/build-app/text-generator", - "ja-jp/user-guide/build-app/agent", + "ja-jp/guides/application-orchestrate/readme", + "ja-jp/guides/application-orchestrate/creating-an-application", { - "group": "チャットフロー & ワークフロー", + "group": "チャットボット", "pages": [ - "ja-jp/user-guide/build-app/flow-app/concepts", - "ja-jp/user-guide/build-app/flow-app/create-flow-app", - "ja-jp/user-guide/build-app/flow-app/variables", + "ja-jp/guides/application-orchestrate/chatbot-application", + "ja-jp/guides/application-orchestrate/multiple-llms-debugging" + ] + }, + "ja-jp/guides/application-orchestrate/agent", + { + "group": "ツールキット", + "pages": [ + "ja-jp/guides/application-orchestrate/app-toolkits/README", + "ja-jp/guides/application-orchestrate/app-toolkits/moderation-tool" + ] + } + ] + }, + { + "group": "ワークフロー", + "pages": [ + "ja-jp/guides/workflow/concepts", + "ja-jp/guides/workflow/variables", { "group": "ノードの説明", "pages": [ - "ja-jp/user-guide/build-app/flow-app/nodes/start", - "ja-jp/user-guide/build-app/flow-app/nodes/end", - "ja-jp/user-guide/build-app/flow-app/nodes/answer", - "ja-jp/user-guide/build-app/flow-app/nodes/llm", - "ja-jp/user-guide/build-app/flow-app/nodes/knowledge-retrieval", - "ja-jp/user-guide/build-app/flow-app/nodes/question-classifier", - "ja-jp/user-guide/build-app/flow-app/nodes/ifelse", - "ja-jp/user-guide/build-app/flow-app/nodes/code", - "ja-jp/user-guide/build-app/flow-app/nodes/template", - "ja-jp/user-guide/build-app/flow-app/nodes/doc-extractor", - "ja-jp/user-guide/build-app/flow-app/nodes/list-operator", - "ja-jp/user-guide/build-app/flow-app/nodes/variable-aggregation", - 
"ja-jp/user-guide/build-app/flow-app/nodes/variable-assigner", - "ja-jp/user-guide/build-app/flow-app/nodes/iteration", - "ja-jp/user-guide/build-app/flow-app/nodes/parameter-extractor", - "ja-jp/user-guide/build-app/flow-app/nodes/http-request", - "ja-jp/user-guide/build-app/flow-app/nodes/tools" + "ja-jp/guides/workflow/nodes/start", + "ja-jp/guides/workflow/nodes/end", + "ja-jp/guides/workflow/nodes/answer", + "ja-jp/guides/workflow/nodes/llm", + "ja-jp/guides/workflow/nodes/knowledge-retrieval", + "ja-jp/guides/workflow/nodes/question-classifier", + "ja-jp/guides/workflow/nodes/ifelse", + "ja-jp/guides/workflow/nodes/code", + "ja-jp/guides/workflow/nodes/template", + "ja-jp/guides/workflow/nodes/doc-extractor", + "ja-jp/guides/workflow/nodes/list-operator", + "ja-jp/guides/workflow/nodes/variable-aggregator", + "ja-jp/guides/workflow/nodes/variable-assigner", + "ja-jp/guides/workflow/nodes/iteration", + "ja-jp/guides/workflow/nodes/parameter-extractor", + "ja-jp/guides/workflow/nodes/http-request", + "ja-jp/guides/workflow/nodes/agent", + "ja-jp/guides/workflow/nodes/tools", + "ja-jp/guides/workflow/nodes/loop" ] }, - "ja-jp/user-guide/build-app/flow-app/orchestrate-node", - "ja-jp/user-guide/build-app/flow-app/file-upload", - "ja-jp/user-guide/build-app/flow-app/additional-feature", - "ja-jp/user-guide/build-app/flow-app/application-publishing" + "ja-jp/guides/workflow/shortcut-key", + "ja-jp/guides/workflow/orchestrate-node", + "ja-jp/guides/workflow/file-upload", + { + "group": "エラー処理", + "pages": [ + "ja-jp/guides/workflow/error-handling/readme", + "ja-jp/guides/workflow/error-handling/predefined-nodes-failure-logic", + "ja-jp/guides/workflow/error-handling/error-type" + ] + }, + "ja-jp/guides/workflow/additional-feature", + { + "group": "プレビューとデバッグ", + "pages": [ + "ja-jp/guides/workflow/debug-and-preview/step-run", + "ja-jp/guides/workflow/debug-and-preview/log", + "ja-jp/guides/workflow/debug-and-preview/checklist", + 
"ja-jp/guides/workflow/debug-and-preview/history" + ] + }, + "ja-jp/guides/workflow/publish", + "ja-jp/guides/workflow/bulletin" ] - } - ] }, { - "group": "アプリのデバッグ", + "group": "ナレッジベース", "pages": [ + "ja-jp/guides/knowledge-base/readme", { - "group": "チャットフロー & ワークフロー", + "group": "ナレッジベース作成", "pages": [ - "ja-jp/user-guide/debug-app/chatflow-and-workflow/preview-and-run", - "ja-jp/user-guide/debug-app/chatflow-and-workflow/step-run", - "ja-jp/user-guide/debug-app/chatflow-and-workflow/log", - "ja-jp/user-guide/debug-app/chatflow-and-workflow/checklist", - "ja-jp/user-guide/debug-app/chatflow-and-workflow/history" - ] - } - ] - }, - { - "group": "アプリの発表", - "pages": [ - { - "group": "公開Webアプリとしてのリリース", - "pages": [ - "ja-jp/user-guide/application-publishing/launch-your-webapp-quickly/web-app-settings", - "ja-jp/user-guide/application-publishing/launch-your-webapp-quickly/text-generator", - "ja-jp/user-guide/application-publishing/launch-your-webapp-quickly/conversation-application" + "ja-jp/guides/knowledge-base/knowledge-base-creation/introduction", + { + "group": "1. 
オンラインデータソースの活用", + "pages": [ + "ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme", + "ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion", + "ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website" + ] + }, + "ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text", + "ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods" ] }, - "ja-jp/user-guide/application-publishing/embedding-in-websites", - "ja-jp/user-guide/application-publishing/developing-with-apis", - "ja-jp/user-guide/application-publishing/based-on-frontend-templates" + { + "group": "ナレッジベースの管理", + "pages": [ + "ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/introduction", + "ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents", + "ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api" + ] + }, + "ja-jp/guides/knowledge-base/metadata", + "ja-jp/guides/knowledge-base/integrate-knowledge-within-application", + "ja-jp/guides/knowledge-base/retrieval-test-and-citation", + "ja-jp/guides/knowledge-base/connect-external-knowledge-base", + "ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api-documentation" + ] + }, + { + "group": "アプリ公開", + "pages": [ + { + "group": "シングルページWebアプリとして公開", + "pages": [ + "ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings", + "ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator", + "ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application" + ] + }, + "ja-jp/guides/application-publishing/embedding-in-websites", + "ja-jp/guides/application-publishing/developing-with-apis", + "ja-jp/guides/application-publishing/based-on-frontend-templates" + ] + }, + { + "group": "アノテーション", 
+ "pages": [ + "ja-jp/guides/annotation/logs", + "ja-jp/guides/annotation/annotation-reply" + ] + }, + { + "group": "モニタリング", + "pages": [ + "ja-jp/guides/monitoring/analysis", + { + "group": "外部Opsツール統合", + "pages": [ + "ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langfuse", + "ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langsmith" + ] + } + ] + }, + { + "group": "扩展", + "pages": [ + { + "group": "API 扩展", + "pages": [ + "zh-hans/guides/tools/extensions/api-based/api-based-extension", + "zh-hans/guides/tools/extensions/api-based/external-data-tool", + "zh-hans/guides/tools/extensions/api-based/cloudflare-workers", + "zh-hans/guides/tools/extensions/api-based/moderation" + ] + }, + { + "group": "代码扩展", + "pages": [ + "zh-hans/guides/tools/extensions/code-based/external-data-tool", + "zh-hans/guides/tools/extensions/code-based/moderation" + ] + } + ] + }, + { + "group": "コラボレーション", + "pages": [ + "ja-jp/guides/workspace/app" ] }, { "group": "管理", "pages": [ - "ja-jp/management/app-management", - "ja-jp/management/team-members-management", - "ja-jp/management/personal-account-management", - "ja-jp/management/version-control" - ] - }, - { - "group": "アプリのモニタリング", - "pages": [ - "ja-jp/user-guide/monitoring/analysis", - "ja-jp/user-guide/monitoring/logs", - "ja-jp/user-guide/monitoring/annotation-reply", - { - "group": "外部ツールとOpsツールの統合", - "pages": [ - "ja-jp/user-guide/monitoring/integrate-external-ops-tools/integrate-langfuse", - "ja-jp/user-guide/monitoring/integrate-external-ops-tools/integrate-langsmith" - ] - } - ] - }, - { - "group": "ナレッジベース", - "pages": [ - { - "group": "ナレッジベースの作成", - "pages": [ - "ja-jp/user-guide/knowledge-base/knowledge-base-creation/upload-documents", - "ja-jp/user-guide/knowledge-base/knowledge-base-creation/sync-from-notion", - "ja-jp/user-guide/knowledge-base/knowledge-base-creation/sync-from-website", - "ja-jp/user-guide/knowledge-base/knowledge-base-creation/connect-external-knowledge-base" - ] - 
}, - { - "group": "インデックスと検索", - "pages": [ - "ja-jp/user-guide/knowledge-base/indexing-and-retrieval/retrieval-augment", - "ja-jp/user-guide/knowledge-base/indexing-and-retrieval/hybrid-search", - "ja-jp/user-guide/knowledge-base/indexing-and-retrieval/rerank", - "ja-jp/user-guide/knowledge-base/indexing-and-retrieval/retrieval" - ] - }, - "ja-jp/user-guide/knowledge-base/retrieval-test-and-citation", - "ja-jp/user-guide/knowledge-base/knowledge-and-documents-maintenance", - "ja-jp/user-guide/knowledge-base/integrate-knowledge-within-application", - "ja-jp/user-guide/knowledge-base/faq" - ] - }, - { - "group": "ツール拡張", - "pages": [ - "ja-jp/user-guide/tools/introduction", - { - "group": "ツールの構成", - "pages": [ - { - "group": "Difyオフィシャルツール", - "pages": [ - "ja-jp/user-guide/tools/dify/google", - "ja-jp/user-guide/tools/dify/bing", - "ja-jp/user-guide/tools/dify/dall-e", - "ja-jp/user-guide/tools/dify/perplexity", - "ja-jp/user-guide/tools/dify/stable-diffusion" - ] - }, - { - "group": "コミュニティツール", - "pages": [ - "ja-jp/user-guide/tools/community/searchapi", - "ja-jp/user-guide/tools/community/alphavantage", - "ja-jp/user-guide/tools/community/comfyui", - "ja-jp/user-guide/tools/community/searxng", - "ja-jp/user-guide/tools/community/serper", - "ja-jp/user-guide/tools/community/siliconflow" - ] - } - ] - }, - "ja-jp/user-guide/tools/quick-tool-integration", - "ja-jp/user-guide/tools/advanced-tool-integration", - { - "group": "API 拡張子", - "pages": [ - "ja-jp/user-guide/tools/extensions/api-based/api-based-extension", - "ja-jp/user-guide/tools/extensions/api-based/external-data-tool", - "ja-jp/user-guide/tools/extensions/api-based/cloudflare-workers", - "ja-jp/user-guide/tools/extensions/api-based/moderation" - ] - }, - { - "group": "コード拡張子", - "pages": [ - "ja-jp/user-guide/tools/extensions/code-based/external-data-tool", - "ja-jp/user-guide/tools/extensions/code-based/moderation" - ] - } - ] - }, - { - "group": "API ドキュメント", - "pages": [ - 
"ja-jp/user-guide/api-documentation/text-generator", - "ja-jp/user-guide/api-documentation/chatbot", - "ja-jp/user-guide/api-documentation/workflow", - "ja-jp/user-guide/api-documentation/knowledge-base", - "ja-jp/user-guide/api-documentation/external-knowledge-api-documentation" - ] - }, - { - "group": "FAQ", - "pages": [ - "ja-jp/user-guide/faq/llm-using" + "ja-jp/guides/management/app-management", + "ja-jp/guides/management/team-members-management", + "ja-jp/guides/management/personal-account-management", + "ja-jp/guides/management/version-control" ] } ] @@ -932,13 +1130,232 @@ ] }, { - "tab": "API リファレンス", - "openapi": "https://assets-docs.dify.ai/2025/03/d497c10fe2c248a01ac93f6cfdf210b1.json" + "group": "ハンズオン工房", + "pages": [ + "zh-hans/workshop/readme", + { + "group": "初級編", + "pages": [ + "zh-hans/workshop/basic/build-ai-image-generation-app", + "zh-hans/workshop/basic/travel-assistant" + ] + }, + { + "group": "中級編", + "pages": [ + "zh-hans/workshop/intermediate/article-reader", + "zh-hans/workshop/intermediate/customer-service-bot", + "zh-hans/workshop/intermediate/twitter-chatflow" + ] + } + ] + }, + { + "group": "コミュニティ", + "pages": [ + "zh-hans/community/support", + "zh-hans/community/contribution", + "zh-hans/community/docs-contribution" + ] + }, + { + "group": "プラグイン", + "pages": [ + "zh-hans/plugins/introduction", + { + "group": "クイックスタート", + "pages": [ + "zh-hans/plugins/quick-start/README", + "zh-hans/plugins/quick-start/install-plugins", + { + "group": "プラグイン開発の入門", + "pages": [ + "zh-hans/plugins/quick-start/develop-plugins/README", + "zh-hans/plugins/quick-start/develop-plugins/initialize-development-tools", + "zh-hans/plugins/quick-start/develop-plugins/tool-plugin", + { + "group": "Model 插件", + "pages": [ + "zh-hans/plugins/quick-start/develop-plugins/model-plugin/README", + "zh-hans/plugins/quick-start/develop-plugins/model-plugin/create-model-providers", + 
"zh-hans/plugins/quick-start/develop-plugins/model-plugin/integrate-the-predefined-model", + "zh-hans/plugins/quick-start/develop-plugins/model-plugin/customizable-model" + ] + }, + "zh-hans/plugins/quick-start/develop-plugins/agent-strategy-plugin", + "zh-hans/plugins/quick-start/develop-plugins/extension-plugin", + "zh-hans/plugins/quick-start/develop-plugins/bundle" + ] + }, + "zh-hans/plugins/quick-start/debug-plugin" + ] + }, + "zh-hans/plugins/manage-plugins", + { + "group": "接口定义", + "pages": [ + "zh-hans/plugins/schema-definition/README", + "zh-hans/plugins/schema-definition/manifest", + "zh-hans/plugins/schema-definition/endpoint", + "zh-hans/plugins/schema-definition/tool", + "zh-hans/plugins/schema-definition/agent", + { + "group": "Model", + "pages": [ + "zh-hans/plugins/schema-definition/model/README", + "zh-hans/plugins/schema-definition/model/model-designing-rules", + "zh-hans/plugins/schema-definition/model/model-schema" + ] + }, + "zh-hans/plugins/schema-definition/general-specifications", + "zh-hans/plugins/schema-definition/persistent-storage", + { + "group": "反向调用 Dify 服务", + "pages": [ + "zh-hans/plugins/schema-definition/reverse-invocation-of-the-dify-service/README", + "zh-hans/plugins/schema-definition/reverse-invocation-of-the-dify-service/app", + "zh-hans/plugins/schema-definition/reverse-invocation-of-the-dify-service/model", + "zh-hans/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool", + "zh-hans/plugins/schema-definition/reverse-invocation-of-the-dify-service/node" + ] + } + ] + }, + { + "group": "最佳实践", + "pages": [ + "zh-hans/plugins/best-practice/README", + "zh-hans/plugins/best-practice/develop-a-slack-bot-plugin" + ] + }, + { + "group": "发布插件", + "pages": [ + "zh-hans/plugins/publish-plugins/README", + { + "group": "发布至 Dify Marketplace", + "pages": [ + "zh-hans/plugins/publish-plugins/publish-to-dify-marketplace/README", + 
"zh-hans/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines", + "zh-hans/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines" + ] + }, + "zh-hans/plugins/publish-plugins/publish-plugin-on-personal-github-repo", + "zh-hans/plugins/publish-plugins/package-plugin-file-and-publish" + ] + }, + "zh-hans/plugins/faq" + ] + }, + { + "group": "開発", + "pages": [ + { + "group": "DifySandbox", + "pages": [ + "zh-hans/development/backend/sandbox/README", + "zh-hans/development/backend/sandbox/contribution" + ] + }, + { + "group": "模型接入", + "pages": [ + "zh-hans/development/models-integration/hugging-face", + "zh-hans/development/models-integration/replicate", + "zh-hans/development/models-integration/xinference", + "zh-hans/development/models-integration/openllm", + "zh-hans/development/models-integration/localai", + "zh-hans/development/models-integration/ollama", + "zh-hans/development/models-integration/litellm", + "zh-hans/development/models-integration/gpustack", + "zh-hans/development/models-integration/aws-bedrock-deepseek" + ] + }, + { + "group": "迁移", + "pages": [ + "zh-hans/development/migration/migrate-to-v1" + ] + } + ] + }, + { + "group": "もっと読む", + "pages": [ + { + "group": "应用案例", + "pages": [ + "zh-hans/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app", + "zh-hans/learn-more/use-cases/private-ai-ollama-deepseek-dify", + "zh-hans/learn-more/use-cases/train-a-qa-chatbot-that-belongs-to-you", + "zh-hans/learn-more/use-cases/create-a-midjoureny-prompt-word-robot-with-zero-code", + "zh-hans/learn-more/use-cases/build-an-notion-ai-assistant", + "zh-hans/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes", + "zh-hans/learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools", + "zh-hans/learn-more/use-cases/dify-on-wechat", + "zh-hans/learn-more/use-cases/dify-on-dingtalk", + 
"zh-hans/learn-more/use-cases/dify-on-teams", + "zh-hans/learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience", + "zh-hans/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website", + "zh-hans/learn-more/use-cases/how-to-connect-aws-bedrock", + "zh-hans/learn-more/use-cases/dify-schedule", + "zh-hans/learn-more/use-cases/dify-model-arena" + ] + }, + { + "group": "扩展阅读", + "pages": [ + "zh-hans/learn-more/extended-reading/what-is-llmops", + "zh-hans/learn-more/extended-reading/what-is-array-variable", + { + "group": "检索增强生成(RAG)", + "pages": [ + "zh-hans/learn-more/extended-reading/retrieval-augment/README", + "zh-hans/learn-more/extended-reading/retrieval-augment/hybrid-search", + "zh-hans/learn-more/extended-reading/retrieval-augment/rerank", + "zh-hans/learn-more/extended-reading/retrieval-augment/retrieval" + ] + }, + "zh-hans/learn-more/extended-reading/prompt-engineering", + "zh-hans/learn-more/extended-reading/how-to-use-json-schema-in-dify" + ] + }, + { + "group": "常见问题", + "pages": [ + "zh-hans/learn-more/faq/README", + "zh-hans/learn-more/faq/install-faq", + "zh-hans/learn-more/faq/llms-use-faq", + "zh-hans/learn-more/faq/plugins" + ] + } + ] + }, + { + "group": "ポリシー", + "pages": [ + "zh-hans/policies/open-source", + { + "group": "用户协议", + "pages": [ + "zh-hans/policies/agreement/README", + "https://dify.ai/terms", + "https://dify.ai/privacy", + "zh-hans/policies/agreement/get-compliance-report" + ] + } + ] } ] } ] }, + "redirects": [ + { + "source": "/getting-started/readme/features-and-specifications", + "destination": "/en/getting-started/readme/features-and-specifications" + } +], "navbar": { "links": [ { diff --git a/en/SUMMARY.md b/en/SUMMARY.md new file mode 100644 index 00000000..2d08be43 --- /dev/null +++ b/en/SUMMARY.md @@ -0,0 +1,249 @@ +# Table of contents + +## Getting Started + +* [Welcome to Dify](README.md) + * [Features and Specifications](getting-started/readme/features-and-specifications.md) + * 
[List of Model Providers](getting-started/readme/model-providers.md) +* [Dify Community](getting-started/install-self-hosted/README.md) + * [Deploy with Docker Compose](getting-started/install-self-hosted/docker-compose.md) + * [Start with Local Source Code](getting-started/install-self-hosted/local-source-code.md) + * [Deploy with aaPanel](getting-started/install-self-hosted/bt-panel.md) + * [Start Frontend Docker Container Separately](getting-started/install-self-hosted/start-the-frontend-docker-container.md) + * [Environment Variables Explanation](getting-started/install-self-hosted/environments.md) + * [FAQs](getting-started/install-self-hosted/faqs.md) +* [Dify Cloud](getting-started/cloud.md) +* [Dify Premium on AWS](getting-started/dify-premium-on-aws.md) + +## Guides + +* [Model](guides/model-configuration/README.md) + * [Add New Provider](guides/model-configuration/new-provider.md) + * [Predefined Model Integration](guides/model-configuration/predefined-model.md) + * [Custom Model Integration](guides/model-configuration/customizable-model.md) + * [Interfaces](guides/model-configuration/interfaces.md) + * [Schema](guides/model-configuration/schema.md) + * [Load Balancing](guides/model-configuration/load-balancing.md) +* [Application Orchestration](guides/application-orchestrate/README.md) + * [Create Application](guides/application-orchestrate/creating-an-application.md) + * [Chatbot Application](guides/application-orchestrate/chatbot-application.md) + * [Multiple Model Debugging](guides/application-orchestrate/multiple-llms-debugging.md) + * [Agent](guides/application-orchestrate/agent.md) + * [Application Toolkits](guides/application-orchestrate/app-toolkits/README.md) + * [Moderation Tool](guides/application-orchestrate/app-toolkits/moderation-tool.md) +* [Workflow](guides/workflow/README.md) + * [Key Concepts](guides/workflow/key-concepts.md) + * [Variables](guides/workflow/variables.md) + * [Node Description](guides/workflow/node/README.md) + * 
[Start](guides/workflow/node/start.md) + * [End](guides/workflow/node/end.md) + * [Answer](guides/workflow/node/answer.md) + * [LLM](guides/workflow/node/llm.md) + * [Knowledge Retrieval](guides/workflow/node/knowledge-retrieval.md) + * [Question Classifier](guides/workflow/node/question-classifier.md) + * [Conditional Branch IF/ELSE](guides/workflow/node/ifelse.md) + * [Code Execution](guides/workflow/node/code.md) + * [Template](guides/workflow/node/template.md) + * [Doc Extractor](guides/workflow/node/doc-extractor.md) + * [List Operator](guides/workflow/node/list-operator.md) + * [Variable Aggregator](guides/workflow/node/variable-aggregator.md) + * [Variable Assigner](guides/workflow/node/variable-assigner.md) + * [Iteration](guides/workflow/node/iteration.md) + * [Parameter Extraction](guides/workflow/node/parameter-extractor.md) + * [HTTP Request](guides/workflow/node/http-request.md) + * [Agent](guides/workflow/node/agent.md) + * [Tools](guides/workflow/node/tools.md) + * [Loop](guides/workflow/node/loop.md) + * [Shortcut Key](guides/workflow/shortcut-key.md) + * [Orchestrate Node](guides/workflow/orchestrate-node.md) + * [File Upload](guides/workflow/file-upload.md) + * [Error Handling](guides/workflow/error-handling/README.md) + * [Predefined Error Handling Logic](guides/workflow/error-handling/predefined-error-handling-logic.md) + * [Error Type](guides/workflow/error-handling/error-type.md) + * [Additional Features](guides/workflow/additional-features.md) + * [Debug and Preview](guides/workflow/debug-and-preview/README.md) + * [Preview and Run](guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md) + * [Step Run](guides/workflow/debug-and-preview/step-run.md) + * [Conversation/Run Logs](guides/workflow/debug-and-preview/log.md) + * [Checklist](guides/workflow/debug-and-preview/checklist.md) + * [Run History](guides/workflow/debug-and-preview/history.md) + * [Application Publishing](guides/workflow/publish.md) + * [Bulletin: Image Upload Replaced by 
File Upload](guides/workflow/bulletin.md) +* [Knowledge](guides/knowledge-base/README.md) + * [Create Knowledge](guides/knowledge-base/create-knowledge-and-upload-documents.md) + * [1. Import Text Data](guides/knowledge-base/create-knowledge-and-upload-documents/1.-import-text-data/README.md) + * [1.1 Import Data from Notion](guides/knowledge-base/create-knowledge-and-upload-documents/1.-import-text-data/1.1-import-data-from-notion.md) + * [1.2 Import Data from Website](guides/knowledge-base/sync-from-website.md) + * [2. Choose a Chunk Mode](guides/knowledge-base/create-knowledge-and-upload-documents/2.-choose-a-chunk-mode.md) + * [3. Select the Indexing Method and Retrieval Setting](guides/knowledge-base/create-knowledge-and-upload-documents/3.-select-the-indexing-method-and-retrieval-setting.md) + * [Manage Knowledge](guides/knowledge-base/knowledge-and-documents-maintenance.md) + * [Maintain Documents](guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md) + * [Maintain Knowledge via API](guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md) + * [Metadata](guides/knowledge-base/metadata.md) + * [Integrate Knowledge Base within Application](guides/knowledge-base/integrate-knowledge-within-application.md) + * [Retrieval Test / Citation and Attributions](guides/knowledge-base/retrieval-test-and-citation.md) + * [Knowledge Request Rate Limit](guides/knowledge-base/knowledge-request-rate-limit.md) + * [Connect to an External Knowledge Base](guides/knowledge-base/connect-external-knowledge.md) + * [External Knowledge API](guides/knowledge-base/external-knowledge-api-documentation.md) +* [Tools](guides/tools/README.md) + * [Quick Tool Integration](guides/tools/quick-tool-integration.md) + * [Advanced Tool Integration](guides/tools/advanced-tool-integration.md) + * [Tool Configuration](guides/tools/tool-configuration/README.md) + * [Google](guides/tools/tool-configuration/google.md) + * 
[Bing](guides/tools/tool-configuration/bing.md) + * [SearchApi](guides/tools/tool-configuration/searchapi.md) + * [StableDiffusion](guides/tools/tool-configuration/stable-diffusion.md) + * [Dall-e](guides/tools/tool-configuration/dall-e.md) + * [Perplexity Search](guides/tools/tool-configuration/perplexity.md) + * [AlphaVantage](guides/tools/tool-configuration/alphavantage.md) + * [Youtube](guides/tools/tool-configuration/youtube.md) + * [SearXNG](guides/tools/tool-configuration/searxng.md) + * [Serper](guides/tools/tool-configuration/serper.md) + * [SiliconFlow (Flux AI Supported)](guides/tools/tool-configuration/siliconflow.md) + * [ComfyUI](guides/tools/tool-configuration/comfyui.md) +* [Publishing](guides/application-publishing/README.md) + * [Publish as a Single-page Web App](guides/application-publishing/launch-your-webapp-quickly/README.md) + * [Web App Settings](guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md) + * [Text Generator Application](guides/application-publishing/launch-your-webapp-quickly/text-generator.md) + * [Conversation Application](guides/application-publishing/launch-your-webapp-quickly/conversation-application.md) + * [Embedding In Websites](guides/application-publishing/embedding-in-websites.md) + * [Developing with APIs](guides/application-publishing/developing-with-apis.md) + * [Re-develop Based on Frontend Templates](guides/application-publishing/based-on-frontend-templates.md) +* [Annotation](guides/annotation/README.md) + * [Logs and Annotation](guides/annotation/logs.md) + * [Annotation Reply](guides/annotation/annotation-reply.md) +* [Monitoring](guides/monitoring/README.md) + * [Data Analysis](guides/monitoring/analysis.md) + * [Integrate External Ops Tools](guides/monitoring/integrate-external-ops-tools/README.md) + * [Integrate LangSmith](guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md) + * [Integrate 
Langfuse](guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md) + * [Integrate Opik](guides/monitoring/integrate-external-ops-tools/integrate-opik.md) +* [Extension](guides/extension/README.md) + * [API-Based Extension](guides/extension/api-based-extension/README.md) + * [External Data Tool](guides/extension/api-based-extension/external-data-tool.md) + * [Deploy API Tools with Cloudflare Workers](guides/extension/api-based-extension/cloudflare-workers.md) + * [Moderation](guides/extension/api-based-extension/moderation.md) + * [Code-Based Extension](guides/extension/code-based-extension/README.md) + * [External Data Tool](guides/extension/code-based-extension/external-data-tool.md) + * [Moderation](guides/extension/code-based-extension/moderation.md) +* [Collaboration](guides/workspace/README.md) + * [Discover](guides/workspace/app.md) + * [Invite and Manage Members](guides/workspace/invite-and-manage-members.md) +* [Management](guides/management/README.md) + * [App Management](guides/management/app-management.md) + * [Team Members Management](guides/management/team-members-management.md) + * [Personal Account Management](guides/management/personal-account-management.md) + * [Subscription Management](guides/management/subscription-management.md) + * [Version Control](guides/management/version-control.md) + +## Workshop + +* [Basic](workshop/basic/README.md) + * [How to Build an AI Image Generation App](workshop/basic/build-ai-image-generation-app.md) +* [Intermediate](workshop/intermediate/README.md) + * [Build An Article Reader Using File Upload](workshop/intermediate/article-reader.md) + * [Building a Smart Customer Service Bot Using a Knowledge Base](workshop/intermediate/customer-service-bot.md) + * [Generating analysis of Twitter account using Chatflow Agent](workshop/intermediate/twitter-chatflow.md) + +## Community + +* [Seek Support](community/support.md) +* [Become a Contributor](community/contribution.md) +* [Contributing to Dify 
Documentation](community/docs-contribution.md) + +## Plugins + +* [Introduction](plugins/introduction.md) +* [Quick Start](plugins/quick-start/README.md) + * [Install and Use Plugins](plugins/quick-start/install-plugins.md) + * [Develop Plugins](plugins/quick-start/develop-plugins/README.md) + * [Initialize Development Tools](plugins/quick-start/develop-plugins/initialize-development-tools.md) + * [Tool Plugin](plugins/quick-start/develop-plugins/tool-plugin.md) + * [Model Plugin](plugins/quick-start/develop-plugins/model-plugin/README.md) + * [Create Model Providers](plugins/quick-start/develop-plugins/model-plugin/create-model-providers.md) + * [Integrate the Predefined Model](plugins/quick-start/develop-plugins/model-plugin/predefined-model.md) + * [Integrate the Customizable Model](plugins/quick-start/develop-plugins/model-plugin/customizable-model.md) + * [Agent Strategy Plugin](plugins/quick-start/develop-plugins/agent-strategy-plugin.md) + * [Extension Plugin](plugins/quick-start/develop-plugins/extension-plugin.md) + * [Bundle](plugins/quick-start/develop-plugins/bundle.md) + * [Debug Plugin](plugins/quick-start/debug-plugin.md) +* [Manage Plugins](plugins/manage-plugins.md) +* [Schema Specification](plugins/schema-definition/README.md) + * [Manifest](plugins/schema-definition/manifest.md) + * [Endpoint](plugins/schema-definition/endpoint.md) + * [Tool](plugins/schema-definition/tool.md) + * [Agent](plugins/schema-definition/agent.md) + * [Model](plugins/schema-definition/model/README.md) + * [Model Designing Rules](plugins/schema-definition/model/model-designing-rules.md) + * [Model Schema](plugins/schema-definition/model/model-schema.md) + * [General Specifications](plugins/schema-definition/general-specifications.md) + * [Persistent Storage](plugins/schema-definition/persistent-storage.md) + * [Reverse Invocation of the Dify Service](plugins/schema-definition/reverse-invocation-of-the-dify-service/README.md) + * 
[App](plugins/schema-definition/reverse-invocation-of-the-dify-service/app.md) + * [Model](plugins/schema-definition/reverse-invocation-of-the-dify-service/model.md) + * [Tool](plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.md) + * [Node](plugins/schema-definition/reverse-invocation-of-the-dify-service/node.md) +* [Best Practice](plugins/best-practice/README.md) + * [Develop a Slack Bot Plugin](plugins/best-practice/develop-a-slack-bot-plugin.md) +* [Publish Plugins](plugins/publish-plugins/README.md) + * [Publish to Dify Marketplace](plugins/publish-plugins/publish-to-dify-marketplace/README.md) + * [Plugin Developer Guidelines](plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.md) + * [Plugin Privacy Protection Guidelines](plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md) + * [Publish to Your Personal GitHub Repository](plugins/publish-plugins/publish-plugin-on-personal-github-repo.md) + * [Package the Plugin File and Publish it](plugins/publish-plugins/package-plugin-file-and-publish.md) +* [FAQ](plugins/faq.md) + +## Development + +* [Backend](development/backend/README.md) + * [DifySandbox](development/backend/sandbox/README.md) + * [Contribution Guide](development/backend/sandbox/contribution.md) +* [Models Integration](development/models-integration/README.md) + * [Integrate Open Source Models from Hugging Face](development/models-integration/hugging-face.md) + * [Integrate Open Source Models from Replicate](development/models-integration/replicate.md) + * [Integrate Local Models Deployed by Xinference](development/models-integration/xinference.md) + * [Integrate Local Models Deployed by OpenLLM](development/models-integration/openllm.md) + * [Integrate Local Models Deployed by LocalAI](development/models-integration/localai.md) + * [Integrate Local Models Deployed by Ollama](development/models-integration/ollama.md) + * [Integrate Models on LiteLLM 
Proxy](development/models-integration/litellm.md) + * [Integrating with GPUStack for Local Model Deployment](development/models-integration/gpustack.md) + * [Integrating AWS Bedrock Models (DeepSeek)](development/models-integration/aws-bedrock-deepseek.md) +* [Migration](development/migration/README.md) + * [Migrating Community Edition to v1.0.0](development/migration/migrate-to-v1.md) + + +## Learn More + +* [Use Cases](learn-more/use-cases/README.md) + * [DeepSeek & Dify Integration Guide: Building AI Applications with Multi-Turn Reasoning](learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md) + * [Private Deployment of Ollama + DeepSeek + Dify: Build Your Own AI Assistant](learn-more/use-cases/private-ai-ollama-deepseek-dify.md) + * [Build a Notion AI Assistant](learn-more/use-cases/build-an-notion-ai-assistant.md) + * [Create a MidJourney Prompt Bot with Dify](learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md) + * [Create an AI Chatbot with Business Data in Minutes](learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md) + * [Integrating Dify Chatbot into Your Wix Website](learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md) + * [How to connect with AWS Bedrock Knowledge Base?](learn-more/use-cases/how-to-connect-aws-bedrock.md) + * [Building the Dify Scheduler](learn-more/use-cases/dify-schedule.md) + * [Building an AI Thesis Slack Bot on Dify](learn-more/use-cases/building-an-ai-thesis-slack-bot.md) +* [Extended Reading](learn-more/extended-reading/README.md) + * [What is LLMOps?](learn-more/extended-reading/what-is-llmops.md) + * [Retrieval-Augmented Generation (RAG)](learn-more/extended-reading/retrieval-augment/README.md) + * [Hybrid Search](learn-more/extended-reading/retrieval-augment/hybrid-search.md) + * [Re-ranking](learn-more/extended-reading/retrieval-augment/rerank.md) + * [Retrieval Modes](learn-more/extended-reading/retrieval-augment/retrieval.md) + * [How to Use JSON Schema Output 
in Dify?](learn-more/extended-reading/how-to-use-json-schema-in-dify.md) +* [FAQ](learn-more/faq/README.md) + * [Self-Host](learn-more/faq/install-faq.md) + * [LLM Configuration and Usage](learn-more/faq/use-llms-faq.md) + * [Plugins](learn-more/faq/plugins.md) + +## Policies + +* [Open Source License](policies/open-source.md) +* [User Agreement](policies/agreement/README.md) + * [Terms of Service](https://dify.ai/terms) + * [Privacy Policy](https://dify.ai/privacy) + * [Get Compliance Report](policies/agreement/get-compliance-report.md) + +## Features + +* [Workflow](features/workflow.md) diff --git a/en/community/contribution.mdx b/en/community/contribution.mdx new file mode 100644 index 00000000..9bfa938a --- /dev/null +++ b/en/community/contribution.mdx @@ -0,0 +1,161 @@ +--- +title: Contributing +--- + + +So you're looking to contribute to Dify - that's awesome, we can't wait to see what you do. As a startup with limited headcount and funding, we have grand ambitions to design the most intuitive workflow for building and managing LLM applications. Any help from the community counts, truly. + +We need to be nimble and ship fast given where we are, but we also want to make sure that contributors like you get as smooth an experience contributing as possible. We've assembled this contribution guide for that purpose, aiming to get you familiar with the codebase & how we work with contributors, so you can quickly jump to the fun part. + +This guide, like Dify itself, is a constant work in progress. We highly appreciate your understanding if at times it lags behind the actual project, and welcome any feedback for us to improve. + +In terms of licensing, please take a minute to read our short [License and Contributor Agreement](https://github.com/langgenius/dify/blob/main/LICENSE). The community also adheres to the [code of conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md). 
+ +### Before you jump in + +[Find](https://github.com/langgenius/dify/issues?q=is:issue+is:closed) an existing issue, or [open](https://github.com/langgenius/dify/issues/new/choose) a new one. We categorize issues into 2 types: + +#### Feature requests: + +* If you're opening a new feature request, we'd like you to explain what the proposed feature achieves, and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has made a solid [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try. +* If you want to pick one up from the existing issues, simply drop a comment below it saying so. + + A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes. + + Depending on whichever area the proposed feature falls under, you might talk to different team members. 
Here's a rundown of the areas each of our team members is working on at the moment: + + | Member | Scope | + | --------------------------------------------------------------------------------------- | ---------------------------------------------------- | + | [@yeuoly](https://github.com/Yeuoly) | Architecting Agents | + | [@jyong](https://github.com/JohnJyong) | RAG pipeline design | + | [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations | + | [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend a breeze to use | + | [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, points of contact for anything | + | [@takatost](https://github.com/takatost) | Overall product direction and architecture | + + How we prioritize: + + | Feature Type | Priority | + | ------------------------------------------------------------ | --------------- | + | High-priority features labeled by a team member | High Priority | + | Popular feature requests from our [community feedback board](https://github.com/langgenius/dify/discussions/categories/ideas) | Medium Priority | + | Non-core features and minor enhancements | Low Priority | + | Valuable but not immediate | Future-Feature | + +#### Anything else (e.g. bug report, performance optimization, typo correction): + +* Start coding right away. + + How we prioritize: + + | Issue Type | Priority | + | ----------------------------------------------------------------------------------- | --------------- | + | Bugs in core functions (cannot log in, applications not working, security loopholes) | Critical | + | Non-critical bugs, performance boosts | Medium Priority | + | Minor fixes (typos, confusing but working UI) | Low Priority | + +### Installing + +Here are the steps to set up Dify for development: + +#### 1. Fork this repository + +#### 2. 
Clone the repo + +Clone the forked repository from your terminal: + +``` +git clone git@github.com:<your_github_username>/dify.git +``` + +#### 3. Verify dependencies + +Dify requires the following dependencies to build; make sure they're installed on your system: + +* [Docker](https://www.docker.com/) +* [Docker Compose](https://docs.docker.com/compose/install/) +* [Node.js v18.x (LTS)](http://nodejs.org) +* [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/) +* [Python](https://www.python.org/) version 3.10.x + +#### 4. Installations + +Dify is composed of a backend and a frontend. Navigate to the backend directory by `cd api/`, then follow the [Backend README](https://github.com/langgenius/dify/blob/main/api/README.md) to install it. In a separate terminal, navigate to the frontend directory by `cd web/`, then follow the [Frontend README](https://github.com/langgenius/dify/blob/main/web/README.md) to install. + +Check the [installation FAQ](https://docs.dify.ai/learn-more/faq/install-faq) for a list of common issues and steps to troubleshoot. + +#### 5. Visit Dify in your browser + +To validate your setup, head over to [http://localhost:3000](http://localhost:3000) (the default, or your self-configured URL and port) in your browser. You should now see Dify up and running. + +### Developing + +If you are adding a model provider, [this guide](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md) is for you. + +If you are adding tools used in Agent Assistants and Workflows, [this guide](https://github.com/langgenius/dify/blob/main/api/core/tools/README.md) is for you. + +> **Note**: If you want to contribute a new tool, please make sure you've left your contact information in the tool's YAML file, and submitted a corresponding docs PR in the [Dify-docs](https://github.com/langgenius/dify-docs/tree/main/en/guides/tools/tool-configuration) repository. 
+ +To help you quickly navigate where your contribution fits, a brief, annotated outline of Dify's backend & frontend is as follows: + +#### Backend + +Dify’s backend is written in Python using [Flask](https://flask.palletsprojects.com/en/3.0.x/). It uses [SQLAlchemy](https://www.sqlalchemy.org/) for ORM and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic is handled via Flask-Login. + +``` +[api/] +├── constants // Constant settings used throughout the code base. +├── controllers // API route definitions and request handling logic. +├── core // Core application orchestration, model integrations, and tools. +├── docker // Docker & containerization related configurations. +├── events // Event handling and processing. +├── extensions // Extensions with 3rd party frameworks/platforms. +├── fields // Field definitions for serialization/marshalling. +├── libs // Reusable libraries and helpers. +├── migrations // Scripts for database migration. +├── models // Database models & schema definitions. +├── services // Specifies business logic. +├── storage // Private key storage. +├── tasks // Handling of async tasks and background jobs. +└── tests +``` + +#### Frontend + +The website is bootstrapped on a [Next.js](https://nextjs.org/) boilerplate in TypeScript and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization. 
+ +``` +[web/] +├── app // layouts, pages, and components +│ ├── (commonLayout) // common layout used throughout the app +│ ├── (shareLayout) // layouts specifically shared across token-specific sessions +│ ├── activate // activate page +│ ├── components // shared by pages and layouts +│ ├── install // install page +│ ├── signin // signin page +│ └── styles // globally shared styles +├── assets // Static assets +├── bin // scripts run at the build step +├── config // adjustable settings and options +├── context // shared contexts used by different portions of the app +├── dictionaries // Language-specific translation files +├── docker // container configurations +├── hooks // Reusable hooks +├── i18n // Internationalization configuration +├── models // describes data models & shapes of API responses +├── public // meta assets like favicon +├── service // specifies shapes of API actions +├── test +├── types // descriptions of function params and return values +└── utils // Shared utility functions +``` + +### Submitting your PR + +At last, it's time to open a pull request (PR) to our repo. For major features, we first merge them into the `deploy/dev` branch for testing before they go into the `main` branch. If you run into issues like merge conflicts or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests). + +And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md). + +### Getting Help + +If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.com/invite/8Tpq4AcN9c) for a quick chat. 
diff --git a/en/community/docs-contribution.mdx b/en/community/docs-contribution.mdx new file mode 100644 index 00000000..5e14f0f9 --- /dev/null +++ b/en/community/docs-contribution.mdx @@ -0,0 +1,79 @@ +--- +title: Contributing to Dify Documentation +--- + + +Dify documentation is an [open-source project](https://github.com/langgenius/dify-docs), and we welcome contributions. Whether you've spotted an issue while reading the docs or you're keen to contribute your own content, we encourage you to submit an issue or initiate a pull request on GitHub. We'll address your PR promptly. + +## How to Contribute + +We categorize documentation issues into three main types: + +* Content Corrections +* Content Additions +* Best Practices + +### Content Corrections + +If you encounter errors while reading a document or wish to suggest modifications, please use the **"Edit on GitHub"** button located in the table of contents on the right side of the document page. Utilize GitHub's built-in online editor to make your changes, then submit a pull request with a concise description of your edits. Please format your pull request title as `Fix: Update xxx`. We'll review your submission and merge the changes if everything looks good. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/community/29a6fe7b317ddb667cb3a58a8fdc4c56.png) + +Alternatively, you can post the document link on our [Issues page](https://github.com/langgenius/dify-docs/issues) with a brief description of the necessary modifications. We'll address these promptly upon receipt. + +### Content Additions + +To contribute new documentation to our repository, please follow these steps: + +1. Fork the repository + +Fork the repository to your GitHub account, then clone it to your local machine: + +```bash +git clone https://github.com/<your_github_username>/dify-docs.git +``` + +> Note: You can also use GitHub's online code editor to submit new Markdown files directly in the appropriate directory. + +2. 
Locate the relevant document directory and add your file + +For instance, if you're contributing documentation for third-party tools, please add new `.md` files to the `/guides/tools/tool-configuration/` directory. + +3. Submit a pull request + +When submitting a pull request, please use the format `Docs: Add xxx` for the title and provide a brief description in the comment field. We'll review your submission and merge the changes if everything is in order. + +### Best Practices + +We warmly encourage you to share the creative application scenarios you have built with Dify! To help community members better understand and replicate your hands-on experience, we recommend structuring your content as follows: + + +```text +1. Introduction + - Application scenarios and problems addressed + - Key features and highlights + - Final results and demonstrations + +2. Project Principles / Process Overview + +3. Prerequisites (if any) + - Required resource list + - Tool and dependency requirements + +4. Implementation in the Dify Platform (Suggested Steps) + - Application creation and basic configurations + - Process-building guide + - Configuration details for key nodes + +5. FAQ +``` + +> For images and screenshots, please use online image hosting links in your documentation. + +We look forward to your valuable contributions and to fostering knowledge within the Dify community together! + +## Getting Help + +If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.com/invite/8Tpq4AcN9c) for a quick chat. + +We appreciate your efforts in improving Dify's documentation! 
diff --git a/en/community/support.mdx b/en/community/support.mdx new file mode 100644 index 00000000..487d1e21 --- /dev/null +++ b/en/community/support.mdx @@ -0,0 +1,22 @@ +--- +title: Seek Support +--- + + +If you still have questions or suggestions about using the product while reading this documentation, please try the following ways to seek support. Our team and community will do their best to help you. + +### Community Support + + +Please do not share your Dify account information or other sensitive information with the community. Our support staff will not ask for your account information. + + +* Submit an Issue on [GitHub](https://github.com/langgenius/dify) +* Join the [Discord community](https://discord.gg/8Tpq4AcN9c) +* Post your ideas or questions on [Reddit](https://www.reddit.com/r/difyai/) + +### Contact Us + +For matters other than product support. + +* Email [hello@dify.ai](mailto:hello@dify.ai) diff --git a/en/development/backend/README.mdx b/en/development/backend/README.mdx new file mode 100644 index 00000000..a8258c5c --- /dev/null +++ b/en/development/backend/README.mdx @@ -0,0 +1,5 @@ +--- +title: Backend Development +--- + + diff --git a/en/development/backend/sandbox/README.mdx b/en/development/backend/sandbox/README.mdx new file mode 100644 index 00000000..f057456a --- /dev/null +++ b/en/development/backend/sandbox/README.mdx @@ -0,0 +1,21 @@ +--- +title: DifySandbox +--- + + +### Introduction +`DifySandbox` is a lightweight, fast, and secure code execution environment that supports multiple programming languages, including Python and Node.js. It serves as the underlying execution environment for various components in Dify Workflow, such as the Code node, Template Transform node, LLM node, and the Code Interpreter in the Tool node. DifySandbox ensures system security while enabling Dify to execute user-provided code. 
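From a client's point of view, executing code through the sandbox amounts to posting a language/code pair to its HTTP service. The sketch below only assembles such a request; the endpoint path, port, header, and field names are assumptions drawn from the dify-sandbox repository, not a documented contract:

```python
import json

# Assumed default endpoint; the real host/port come from your dify-sandbox configuration.
SANDBOX_URL = "http://localhost:8194/v1/sandbox/run"

def build_run_request(language: str, code: str, api_key: str) -> dict:
    """Assemble a (hypothetical) run-code request for the sandbox HTTP service."""
    if language not in ("python3", "nodejs"):
        raise ValueError(f"unsupported language: {language}")
    return {
        "url": SANDBOX_URL,
        "headers": {"X-Api-Key": api_key, "Content-Type": "application/json"},
        # User code travels as plain text; network access is left off by default.
        "body": json.dumps({"language": language, "code": code, "enable_network": False}),
    }

request = build_run_request("python3", "print('hello from the sandbox')", "dify-sandbox")
print(request["url"])
```

An HTTP client can then POST `request["body"]` to `request["url"]` with `request["headers"]` and read the execution result from the response.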
+ +### Features +- **Multi-language Support**: DifySandbox is built on Seccomp, a low-level security mechanism, and supports multiple programming languages on top of it. Currently, it supports Python and Node.js. +- **System Security**: It implements a whitelist policy, allowing only specific system calls to prevent unexpected security breaches. +- **File System Isolation**: User code runs in an isolated file system environment. +- **Network Isolation**: + - **Docker Compose**: Utilizes a separate Sandbox network and proxy containers for network access, maintaining intranet system security while offering flexible proxy configuration options. + - **K8s**: Network isolation strategies can be directly configured using Egress policies. + +### Project Repository +You can access the [DifySandbox](https://github.com/langgenius/dify-sandbox) repository to obtain the project source code and follow the project documentation for deployment and usage instructions. + +### Contribution +Please refer to the [Contribution Guide](contribution.md) to learn how you can participate in the development of DifySandbox. 
diff --git a/en/development/backend/sandbox/contribution.mdx b/en/development/backend/sandbox/contribution.mdx new file mode 100644 index 00000000..f9ee6a51 --- /dev/null +++ b/en/development/backend/sandbox/contribution.mdx @@ -0,0 +1,52 @@ +--- +title: Contribution +--- + + +### Code Structure +The following code file structure outlines the organization of the project: +``` +[cmd/] +├── server // Server startup entry point +├── lib // Shared library entry point +└── test // Common test scripts +[build/] // Build scripts for different architectures and platforms +[internal/] // Internal packages +├── controller // HTTP request handlers +├── middleware // Request processing middleware +├── server // Server setup and configuration +├── service // Controller services +├── static // Configuration files +│ ├── nodejs_syscall // Node.js system call whitelist +│ └── python_syscall // Python system call whitelist +├── types // Entity definitions +├── core // Core isolation and execution logic +│ ├── lib // Shared libraries +│ ├── runner // Code execution +│ │ ├── nodejs // Node.js executor +│ │ └── python // Python executor +└── tests // CI/CD tests +``` + +### Principle +The core functionality has two entry points: the `HTTP` service entry for `DifySandbox` and the `dynamic link library` entry. When the Sandbox runs code, it first generates a temporary code file. This file begins by calling the `dynamic link library` to initialize the runtime environment (the `Sandbox`). The user's code is then executed within this temporary file, ensuring that the system remains protected from potentially harmful user-submitted code. + +The dynamic link library uses `Seccomp` to restrict system calls. The `static` directory contains `nodejs_syscall` and `python_syscall` files, which provide system call whitelists for both `ARM64` and `AMD64` architectures. There are four files in total. Please do not modify these files unless absolutely necessary. 
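Conceptually, a syscall whitelist reduces to a membership check before each system call is allowed to proceed. Below is a pure-Python simulation of that idea (the numbers are real x86-64 syscall numbers, but the whitelist contents are illustrative; the actual enforcement happens inside the dynamic link library via Seccomp):

```python
# Illustrative whitelist: x86-64 syscall numbers for read, write, close, mmap, munmap, exit.
ALLOWED_SYSCALLS = {0, 1, 3, 9, 11, 60}

def is_allowed(syscall_nr: int) -> bool:
    """Simulate the whitelist decision: only listed syscalls may proceed."""
    return syscall_nr in ALLOWED_SYSCALLS

print(is_allowed(1))   # write is whitelisted -> True
print(is_allowed(41))  # socket is not -> False; under Seccomp the offending process is killed
```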
+ +### How to Contribute +For minor issues like `Typos` and `Bugs`, feel free to submit a `Pull Request`. For major changes or `Feature`-level submissions, please open an `Issue` first to facilitate discussion. + +#### To-Do List +Here are some items we're currently considering. If you're interested, you can choose one to contribute: +- [ ] Support for additional programming languages: + - We currently support `Python` and `Node.js`. Consider adding support for new languages. + - Remember to account for both `ARM64` and `AMD64` architectures, and provide `CI` testing to ensure security for any new language. +- [ ] Node.js dependency management: + - We've implemented support for `Python` dependencies, which can be automatically installed during Sandbox initialization. However, due to the complexity of `node_modules`, we haven't yet found a good solution for `Node.js`. This is an area open for improvement. +- [ ] Image processing capabilities: + - As multimodality becomes increasingly important, supporting image processing in the `Sandbox` would be valuable. + - Consider adding support for image processing libraries like `Pillow`, and enable passing images into the `Sandbox` for processing in `Dify`. +- [ ] Enhanced `CI` testing: + - Our current `CI` testing is limited and includes only basic test cases. More comprehensive testing would be beneficial. +- [ ] Multimodal data generation: + - Explore using the `Sandbox` to generate multimodal data, such as combining text and images. 
\ No newline at end of file diff --git a/en/development/migration/README.mdx b/en/development/migration/README.mdx new file mode 100644 index 00000000..7bfd1b8e --- /dev/null +++ b/en/development/migration/README.mdx @@ -0,0 +1,5 @@ +--- +title: Migration +--- + + diff --git a/en/development/migration/migrate-to-v1.mdx b/en/development/migration/migrate-to-v1.mdx new file mode 100644 index 00000000..e503c542 --- /dev/null +++ b/en/development/migration/migrate-to-v1.mdx @@ -0,0 +1,97 @@ +--- +title: Upgrading Community Edition to v1.0.0 +--- + + +> This document primarily explains how to upgrade from an older Community Edition version to [v1.0.0](https://github.com/langgenius/dify/releases/tag/1.0.0). If you have not installed the Dify Community Edition yet, you can directly clone the [Dify project](https://github.com/langgenius/dify) and switch to the `1.0.0` branch. For installation commands, refer to the [documentation](https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose). + +To experience the plugin functionality in the Community Edition, you need to upgrade to version `v1.0.0`. This document will guide you through the steps of upgrading from older versions to `v1.0.0` to access the plugin ecosystem features. + +## Start the Upgrade + +The upgrade process involves the following steps: + +1. Back up your data +2. Upgrade the main Dify project +3. Migrate tools to plugins + +### 1. Backup Data + +1.1 Execute the `cd` command to navigate to your Dify project directory and create a backup branch. + +1.2 Run the following commands to back up your docker-compose YAML file (optional). + +```bash +cd docker +cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak +``` + +1.3 Run the following commands to stop the Docker services, then back up the data in the docker directory. + +```bash +docker compose down +tar -czvf volumes-$(date +%s).tgz volumes +``` + +### 2. Upgrade the Version + +`v1.0.0` supports deployment via Docker Compose. 
Navigate to your Dify project path and run the following commands to upgrade to the new version: + +```bash +git fetch origin +git checkout 1.0.0 # Switch to the 1.0.0 branch +cd docker +nano .env # Update the environment variables, using the .env.example file as a reference +docker compose -f docker-compose.yaml up -d +``` + +### 3. Migrate Tools to Plugins + +The purpose of this step is to automatically migrate the tools and model providers previously used in the Community Edition and install them into the new plugin environment. + +1. Run `docker ps` to find the docker-api container ID. + +Example: + +```bash +docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +417241cd**** nginx:latest "sh -c 'cp /docker-e…" 3 hours ago Up 3 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp docker-nginx-1 +f84aa773**** langgenius/dify-api:1.0.0 "/bin/bash /entrypoi…" 3 hours ago Up 3 hours 5001/tcp docker-worker-1 +a3cb19c2**** langgenius/dify-api:1.0.0 "/bin/bash /entrypoi…" 3 hours ago Up 3 hours 5001/tcp docker-api-1 +``` + +Run the command `docker exec -it a3cb19c2**** bash` to enter the container terminal, and then run: + +```bash +poetry run flask extract-plugins --workers=20 +``` + +> If an error occurs, it is recommended to first install the `poetry` environment on the server as per the prerequisites. If the terminal asks for input after running the command, press **“Enter”** to skip the input. + +This command will extract all models and tools currently in use in the environment. The `--workers` parameter controls the number of parallel processes used during extraction and can be adjusted as needed. After the command runs, it will generate a `plugins.jsonl` file containing plugin information for all workspaces in the current Dify instance. + +Ensure your network can access the public internet, in particular `https://marketplace.dify.ai`. 
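The generated `plugins.jsonl` file is in JSON Lines format: one JSON object per line, one entry per extracted plugin. A small sketch of inspecting such a file (the field names here are hypothetical; check the generated file for the real keys):

```python
import json

# Hypothetical sample resembling an extracted plugins.jsonl; real keys may differ.
sample = '\n'.join([
    '{"plugin": "langgenius/google", "tenant_id": "ws-1"}',
    '{"plugin": "langgenius/dalle", "tenant_id": "ws-2"}',
])

def load_jsonl(text: str) -> list:
    """Parse JSON Lines: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

records = load_jsonl(sample)
print(len(records))  # 2 plugin entries
```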
Continue running the following command in the `docker-api-1` container:
+
+```bash
+poetry run flask install-plugins --workers=2
+```
+
+This command downloads and installs all necessary plugins into the latest Community Edition.
+
+Finally, migrate the plugin data. Run the following command to update each provider name to the new plugin format `langgenius/{provider_name}/{provider_name}`.
+
+```bash
+poetry run flask migrate-data-for-plugin
+```
+
+The migration is complete when you see output like the following in your terminal.
+
+```bash
+Migrate [tool_builtin_providers] data for plugin completed, total: 6
+Migrate data for plugin completed.
+```
+
+## Verify the Migration
+
+Access the Dify platform and click the **"Plugins"** button in the upper-right corner to check whether the previously used tools have been installed correctly. Try one of the plugins to verify that it works properly. If it does, the version upgrade and data migration have been completed successfully.
diff --git a/en/development/models-integration/README.mdx b/en/development/models-integration/README.mdx
new file mode 100644
index 00000000..b6f034e1
--- /dev/null
+++ b/en/development/models-integration/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Models Integration
+---
+
+
diff --git a/en/development/models-integration/aws-bedrock-deepseek.mdx b/en/development/models-integration/aws-bedrock-deepseek.mdx
new file mode 100644
index 00000000..7ebbd8b4
--- /dev/null
+++ b/en/development/models-integration/aws-bedrock-deepseek.mdx
@@ -0,0 +1,78 @@
+---
+title: Integrate Models from AWS Bedrock
+---
+
+
+## Overview
+
+The [AWS Bedrock Marketplace](https://aws.amazon.com/bedrock/marketplace/) is a comprehensive platform for deploying large language models (LLMs). It allows developers to discover, test, and deploy over 100 emerging foundation models (FMs) seamlessly.
+
+This guide takes the deployment of DeepSeek models as an example to demonstrate how to deploy a model on the Bedrock Marketplace platform and integrate it into the Dify platform, helping you quickly build AI applications based on DeepSeek models.
+
+## Prerequisites
+
+- An AWS account with access to [Bedrock](https://aws.amazon.com/bedrock/).
+- A [Dify.AI account](https://cloud.dify.ai/).
+
+## Deployment Procedure
+
+### 1. Deploy the DeepSeek Model
+
+#### 1.1 Searching and Selecting the Model
+
+1. Navigate to the **Bedrock Marketplace** and search for **DeepSeek**.
+2. Choose a **DeepSeek** model based on your requirements.
+
+![](https://assets-docs.dify.ai/2025/02/9c6e17fc0cf262b2005013bf122251d1.png)
+
+#### 1.2 Initiating Deployment
+
+1. Go to the **Model detail** page and click **Deploy**.
+2. Follow the instructions to configure the deployment settings.
+
+> **Note:** Different model versions require different compute configurations, which affects costs.
+
+![](https://assets-docs.dify.ai/2025/02/613497e3473d9b6eaa7cb5611decee0c.png)
+
+#### 1.3 Retrieving the Endpoint
+
+Once deployment is complete, navigate to the **Marketplace Deployments** page to find the auto-generated **Endpoint**. This endpoint is equivalent to a **SageMaker endpoint** and will be used for connecting to the Dify platform.
+
+![View Endpoint](https://assets-docs.dify.ai/2025/02/82a1d6406662b83386b86ec511ab20be.png)
+
+### 2. Connecting DeepSeek to the Dify Platform
+
+#### 2.1 Accessing Configuration Settings
+
+1. Log in to the Dify management panel and go to the **Settings** page.
+
+2. On the **Model Provider** page, select **Amazon SageMaker**.
+
+![Add Model](https://assets-docs.dify.ai/2025/02/864fc8476c47b460b67f14152cbbf360.png)
+
+#### 2.2 Configuring SageMaker Settings
+
+Click **Add Model** and fill in the following information:
+
+ * **Model Type:** Select **LLM** as the model type
+ * **Model Name:** Provide a custom name for your model
+ * **SageMaker Endpoint:** Enter the endpoint retrieved from the Bedrock Marketplace
+
+![](https://assets-docs.dify.ai/2025/02/1feaa8d5054933f42da25a8f655b5a9e.png)
+
+### 3. Testing the Model
+
+1. Open Dify and select **Create a Blank App**.
+2. Select either **Chatflow** or **Workflow**.
+3. Add an **LLM** node.
+4. Verify model responses (see the screenshot below for expected responses).
+
+![Model Running](https://assets-docs.dify.ai/2025/02/e7fb06888101662ecb970401fdba63b5.png)
+
+> **Note:** You can also create a **Chatbot** application for additional testing.
+
+## FAQ
+
+### 1. **Endpoint Parameter Not Visible After Deployment**
+
+Ensure that the compute instance is configured correctly and that AWS permissions are properly set. If the issue persists, consider redeploying the model or contacting AWS customer support.
diff --git a/en/development/models-integration/gpustack.mdx b/en/development/models-integration/gpustack.mdx
new file mode 100644
index 00000000..c6f1b973
--- /dev/null
+++ b/en/development/models-integration/gpustack.mdx
@@ -0,0 +1,68 @@
+---
+title: Integrating with GPUStack for Local Model Deployment
+---
+
+
+[GPUStack](https://github.com/gpustack/gpustack) is an open-source GPU cluster manager for running large language models (LLMs).
+
+Dify allows integration with GPUStack for local deployment of large language model inference, embedding, and reranking capabilities.
+
+## Deploying GPUStack
+
+You can refer to the official [documentation](https://docs.gpustack.ai) for deployment, or quickly integrate following the steps below:
+
+### Linux or macOS
+
+GPUStack provides a script to install it as a service on systemd- or launchd-based systems.
To install GPUStack using this method, just run:
+
+```bash
+curl -sfL https://get.gpustack.ai | sh -s -
+```
+
+### Windows
+
+Run PowerShell as administrator (**avoid** using PowerShell ISE), then run the following command to install GPUStack:
+
+```powershell
+Invoke-Expression (Invoke-WebRequest -Uri "https://get.gpustack.ai" -UseBasicParsing).Content
+```
+
+Then you can follow the printed instructions to access the GPUStack UI.
+
+## Deploying LLM
+
+Take an LLM hosted on GPUStack as an example:
+
+1. In the GPUStack UI, navigate to the "Models" page, click "Deploy Model", and choose `Hugging Face` from the dropdown.
+
+2. Use the search bar in the top left to search for the model name `Qwen/Qwen2.5-0.5B-Instruct-GGUF`.
+
+3. Click `Save` to deploy the model.
+
+![gpustack-deploy-llm](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/35f535a6bb3023aa69a3fdafbdb0c8f3.png)
+
+## Create an API Key
+
+1. Navigate to the "API Keys" page and click "New API Key".
+
+2. Fill in the name, then click `Save`.
+
+3. Copy the API key and save it for later use.
+
+## Integrating GPUStack into Dify
+
+1. Go to `Settings > Model Providers > GPUStack` and fill in:
+
+   - Model Type: `LLM`
+
+   - Model Name: `qwen2.5-0.5b-instruct`
+
+   - Server URL: `http://your-gpustack-server-ip`
+
+   - API Key: `Input the API key you copied in the previous steps`
+
+   Click "Save" to use the model in the application.
+
+![add-gpustack-llm](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/8e8851fec5a1515a2284aad68b90ad40.png)
+
+For more information about GPUStack, please refer to the [GitHub repo](https://github.com/gpustack/gpustack).
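Outside of Dify, you can sanity-check the deployment by talking to GPUStack directly. The sketch below only builds the request and does not send it, so it runs offline; it assumes GPUStack's OpenAI-compatible chat endpoint lives under `/v1-openai/chat/completions`, so verify the exact path against the GPUStack documentation before use.

```python
import json
from urllib import request

# Placeholders matching the settings above; the "/v1-openai/..." path is an
# assumption about GPUStack's OpenAI-compatible API, not confirmed here.
server_url = "http://your-gpustack-server-ip"
api_key = "YOUR_GPUSTACK_API_KEY"  # the key created in "Create an API Key"

req = request.Request(
    url=f"{server_url}/v1-openai/chat/completions",
    data=json.dumps({
        "model": "qwen2.5-0.5b-instruct",  # the model name deployed above
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    method="POST",
)
# request.urlopen(req) would actually send it; omitted in this offline sketch.
print(req.full_url)
```

If the real endpoint answers, you know the Server URL and API Key you entered in Dify are correct.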
diff --git a/en/development/models-integration/hugging-face.mdx b/en/development/models-integration/hugging-face.mdx
new file mode 100644
index 00000000..ae1c18a0
--- /dev/null
+++ b/en/development/models-integration/hugging-face.mdx
@@ -0,0 +1,72 @@
+---
+title: Integrate Open Source Models from Hugging Face
+---
+
+
+Dify supports Text-Generation and Embeddings. Below are the corresponding Hugging Face model types:
+
+* Text-Generation: [text-generation](https://huggingface.co/models?pipeline\_tag=text-generation\&sort=trending), [text2text-generation](https://huggingface.co/models?pipeline\_tag=text2text-generation\&sort=trending)
+* Embeddings: [feature-extraction](https://huggingface.co/models?pipeline\_tag=feature-extraction\&sort=trending)
+
+The specific steps are as follows:
+
+1. Create a Hugging Face account ([registration page](https://huggingface.co/join)).
+2. Get a Hugging Face API key ([settings page](https://huggingface.co/settings/tokens)).
+3. Pick a model from the [Hugging Face model list page](https://huggingface.co/models?pipeline\_tag=text-generation\&sort=trending).
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/af8a771b1e71152837e0f25b87a4471e.png)
+
+Dify supports accessing models on Hugging Face in two ways:
+
+1. Hosted Inference API. This method uses models officially hosted by Hugging Face. It is free, but only a small number of models support it.
+2. Inference Endpoint. This method deploys the model on cloud resources (such as AWS) managed by Hugging Face and incurs costs.
+
+### Models that access the Hosted Inference API
+
+#### 1 Select a model
+
+The Hosted Inference API is supported only when the model details page shows a Hosted inference API section on its right side.
As shown in the figure below:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/dc5a5584cef16fe76595058d37043546.png)
+
+On the model details page, you can get the name of the model.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/248a80fb0dac520e690cb122dbe91324.png)
+
+#### 2 Use the model in Dify
+
+Select Hosted Inference API for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/425f762f4b4b88d7ff69f5e3e898e6d0.png)
+
+The API Token is the API Key obtained at the beginning of the article. The model name is the one obtained in the previous step.
+
+### Method 2: Inference Endpoint
+
+#### 1 Select the model to deploy
+
+The Inference Endpoint is only supported for models that show an Inference Endpoints option under the Deploy button on the right side of the model details page. As shown below:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/0821340197577ff126440b2558446890.png)
+
+#### 2 Deploy the model
+
+Click the Deploy button for the model and select the Inference Endpoint option. If you have not added a payment card before, you will be prompted to add one; just follow the process. Afterwards, the following interface appears: adjust the configuration as needed, and click Create Endpoint in the lower left corner to create the Inference Endpoint.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/f7582adc0937dc4f038e462b578d9c17.png)
+
+After the model is deployed, you can see the Endpoint URL.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/30c38995813c0d05c35c42dc0f1d467c.png)
+
+#### 3 Use the model in Dify
+
+Select Inference Endpoints for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/736421b38ac731e58585487ca2040c8b.png)
+
+The API Token is the API Key obtained at the beginning of the article. The name of a Text-Generation model can be arbitrary, but the name of an Embeddings model must match the one on Hugging Face. The Endpoint URL is the one obtained after the successful deployment of the model in the previous step.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/6a8908cace5b287070577bf555d69b0c.png)
+
+> Note: The "User name / Organization Name" for Embeddings needs to be filled in according to your deployment method on Hugging Face's [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/guides/access), with either the "[User name](https://huggingface.co/settings/account)" or the "[Organization Name](https://ui.endpoints.huggingface.co/)".
diff --git a/en/development/models-integration/litellm.mdx b/en/development/models-integration/litellm.mdx
new file mode 100644
index 00000000..2e7a7450
--- /dev/null
+++ b/en/development/models-integration/litellm.mdx
@@ -0,0 +1,83 @@
+---
+title: Integrate Models on LiteLLM Proxy
+---
+
+
+[LiteLLM Proxy](https://github.com/BerriAI/litellm) is a proxy server that allows:
+
+* Calling 100+ LLMs (OpenAI, Azure, Vertex, Bedrock) in the OpenAI format
+* Using Virtual Keys to set budgets and rate limits, and to track usage
+
+Dify supports integrating the LLM and Text Embedding models available on LiteLLM Proxy.
+
+## Quick Integration
+
+### Step 1.
Start LiteLLM Proxy Server
+
+LiteLLM requires a config file with all your models defined; we will call this file `litellm_config.yaml`.
+
+[Detailed docs on how to set up the LiteLLM config are here](https://docs.litellm.ai/docs/proxy/configs)
+
+```yaml
+model_list:
+  - model_name: gpt-4
+    litellm_params:
+      model: azure/chatgpt-v-2
+      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
+      api_version: "2023-05-15"
+      api_key:
+  - model_name: gpt-4
+    litellm_params:
+      model: azure/gpt-4
+      api_key:
+      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
+  - model_name: gpt-4
+    litellm_params:
+      model: azure/gpt-4
+      api_key:
+      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
+```
+
+### Step 2. Start LiteLLM Proxy
+
+```shell
+docker run \
+    -v $(pwd)/litellm_config.yaml:/app/config.yaml \
+    -p 4000:4000 \
+    ghcr.io/berriai/litellm:main-latest \
+    --config /app/config.yaml --detailed_debug
+```
+
+On success, the proxy will be running on `http://localhost:4000`.
+
+### Step 3. Integrate LiteLLM Proxy in Dify
+
+In `Settings > Model Providers > OpenAI-API-compatible`, fill in:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/c02feef6b054be16639ecd23ce10b605.png)
+
+* Model Name: `gpt-4`
+* Base URL: `http://localhost:4000`
+
+  Enter the base URL where the LiteLLM service is accessible.
+* Model Type: `Chat`
+* Model Context Length: `4096`
+
+  The maximum context length of the model. If unsure, use the default value of 4096.
+* Maximum Token Limit: `4096`
+
+  The maximum number of tokens returned by the model. If there are no specific requirements for the model, this can be consistent with the model context length.
+* Support for Vision: `Yes`
+
+  Check this option if the model supports image understanding (multimodal), like `gpt-4o`.
+
+Click "Save" to use the model in the application after verifying that there are no errors.
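One detail worth noting from Step 1: the sample `litellm_config.yaml` registers three deployments under the same `model_name: gpt-4`, and LiteLLM Proxy load-balances requests for `gpt-4` across them. Conceptually it behaves like a router cycling over the deployment pool, as in this illustrative sketch (not LiteLLM's actual implementation, which supports several routing strategies):

```python
import itertools

# The three Azure deployments that share the public name "gpt-4" in
# litellm_config.yaml (api_key values omitted, as in the config).
deployments = [
    {"model": "azure/chatgpt-v-2", "api_base": "https://openai-gpt-4-test-v-1.openai.azure.com/"},
    {"model": "azure/gpt-4", "api_base": "https://openai-gpt-4-test-v-2.openai.azure.com/"},
    {"model": "azure/gpt-4", "api_base": "https://openai-gpt-4-test-v-2.openai.azure.com/"},
]

router = itertools.cycle(deployments)  # naive round-robin stand-in

# Four consecutive "gpt-4" requests walk through the pool and wrap around.
targets = [next(router)["api_base"] for _ in range(4)]
```

From Dify's side this is invisible: the app always asks for `gpt-4`, and the proxy picks a backend.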
+
+The integration method for Embedding models is similar to LLM; just change the model type to Text Embedding.
+
+## More Information
+
+For more information on LiteLLM, please refer to:
+
+* [LiteLLM](https://github.com/BerriAI/litellm)
+* [LiteLLM Proxy Server](https://docs.litellm.ai/docs/simple\_proxy)
diff --git a/en/development/models-integration/localai.mdx b/en/development/models-integration/localai.mdx
new file mode 100644
index 00000000..55547a4a
--- /dev/null
+++ b/en/development/models-integration/localai.mdx
@@ -0,0 +1,95 @@
+---
+title: Integrating with LocalAI for Local Model Deployment
+---
+
+
+[LocalAI](https://github.com/go-skynet/LocalAI) is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. It does not require a GPU.
+
+Dify allows integration with LocalAI for local deployment of large language model inference and embedding capabilities.
+
+## Deploying LocalAI
+
+### Starting LocalAI
+
+You can refer to the official [Getting Started](https://localai.io/basics/getting_started/) guide for deployment, or quickly integrate following the steps below:
+
+(These steps are derived from the [LocalAI Data query example](https://github.com/go-skynet/LocalAI/blob/master/examples/langchain-chroma/README.md).)
+
+1. First, clone the LocalAI code repository and navigate to the specified directory.
+
+   ```bash
+   $ git clone https://github.com/go-skynet/LocalAI
+   $ cd LocalAI/examples/langchain-chroma
+   ```
+
+2. Download example LLM and Embedding models.
+
+   ```bash
+   $ wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O models/bert
+   $ wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
+   ```
+
+   Here, we choose two smaller models that are compatible across all platforms.
`ggml-gpt4all-j` serves as the default LLM model, and `all-MiniLM-L6-v2` serves as the default Embedding model, for quick local deployment.
+
+3. Configure the .env file.
+
+   ```shell
+   $ mv .env.example .env
+   ```
+
+   NOTE: Ensure that the THREADS variable value in `.env` doesn't exceed the number of CPU cores on your machine.
+
+4. Start LocalAI.
+
+   ```shell
+   # start with docker-compose
+   $ docker-compose up -d --build
+
+   # tail the logs & wait until the build completes
+   $ docker logs -f langchain-chroma-api-1
+   7:16AM INF Starting LocalAI using 4 threads, with models path: /models
+   7:16AM INF LocalAI version: v1.24.1 (9cc8d9086580bd2a96f5c96a6b873242879c70bc)
+   ```
+
+   The LocalAI request API endpoint will be available at http://127.0.0.1:8080.
+
+   And it provides two models, namely:
+
+   - LLM Model: `ggml-gpt4all-j`
+
+     External access name: `gpt-3.5-turbo` (This name is customizable and can be configured in `models/gpt-3.5-turbo.yaml`).
+
+   - Embedding Model: `all-MiniLM-L6-v2`
+
+     External access name: `text-embedding-ada-002` (This name is customizable and can be configured in `models/embeddings.yaml`).
+
+   > If you use the Dify Docker deployment method, pay attention to the network configuration to ensure that the Dify container can reach LocalAI's endpoint. The Dify container cannot access `localhost` (it refers to the container itself), so use the host's IP address instead.
+
+5. Integrate the models into Dify.
+
+   Go to `Settings > Model Providers > LocalAI` and fill in:
+
+   Model 1: `ggml-gpt4all-j`
+
+   - Model Type: Text Generation
+
+   - Model Name: `gpt-3.5-turbo`
+
+   - Server URL: http://127.0.0.1:8080
+
+     If Dify is deployed via Docker, fill in the host domain: `http://:8080`, which can be a LAN IP address, like `http://192.168.1.100:8080`.
+
+   Click "Save" to use the model in the application.
+
+   Model 2: `all-MiniLM-L6-v2`
+
+   - Model Type: Embeddings
+
+   - Model Name: `text-embedding-ada-002`
+
+   - Server URL: http://127.0.0.1:8080
+
+   > If Dify is deployed via Docker, fill in the host domain: `http://:8080`, which can be a LAN IP address, like `http://192.168.1.100:8080`.
+
+   Click "Save" to use the model in the application.
+
+For more information about LocalAI, please refer to: https://github.com/go-skynet/LocalAI
\ No newline at end of file
diff --git a/en/development/models-integration/ollama.mdx b/en/development/models-integration/ollama.mdx
new file mode 100644
index 00000000..0f8ddd8b
--- /dev/null
+++ b/en/development/models-integration/ollama.mdx
@@ -0,0 +1,126 @@
+---
+title: Integrate Local Models Deployed by Ollama
+---
+
+
+![ollama](<../../.gitbook/assets/ollama (1).png>)
+
+[Ollama](https://github.com/jmorganca/ollama) is a cross-platform inference framework client (macOS, Windows, Linux) designed for seamless deployment of large language models (LLMs) such as Llama 2, Mistral, Llava, and more. With its one-click setup, Ollama enables local execution of LLMs, providing enhanced data privacy and security by keeping your data on your own machine.
+
+## Quick Integration
+
+### Download and Launch Ollama
+
+1. Download Ollama
+
+   Visit [https://ollama.com/download](https://ollama.com/download) to download the Ollama client for your system.
+
+2. Run Ollama and Chat with Llama3.2
+
+   ```bash
+   ollama run llama3.2
+   ```
+
+   After a successful launch, Ollama starts an API service on local port 11434, which can be accessed at `http://localhost:11434`.
+
+   For other models, visit [Ollama Models](https://ollama.com/library) for more details.
+
+3.
Integrate Ollama in Dify
+
+   In `Settings > Model Providers > Ollama`, fill in:
+
+   ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/06aa8dca873203e7bd8cc1866fa8f83b.png)
+
+   * Model Name: `llama3.2`
+   * Base URL: `http://:11434`
+
+     Enter the base URL where the Ollama service is accessible. If filling in a public URL still results in an error, please refer to the [FAQ](#faq) and modify the environment variables to make the Ollama service accessible from all IPs.
+
+     If Dify is deployed using Docker, consider using the local network IP address, e.g., `http://192.168.1.100:11434` or `http://host.docker.internal:11434`, to access the service.
+
+     For local source code deployment, use `http://localhost:11434`.
+   * Model Type: `Chat`
+   * Model Context Length: `4096`
+
+     The maximum context length of the model. If unsure, use the default value of 4096.
+   * Maximum Token Limit: `4096`
+
+     The maximum number of tokens returned by the model. If there are no specific requirements for the model, this can be consistent with the model context length.
+   * Support for Vision: `Yes`
+
+     Check this option if the model supports image understanding (multimodal), like `llava`.
+
+   Click "Save" to use the model in the application after verifying that there are no errors.
+
+   The integration method for Embedding models is similar to LLM; just change the model type to Text Embedding.
+4. Use Ollama Models
+
+   ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/f6003949d3fdf670fb4d61fab39d7947.png)
+
+   Open the `Prompt Eng.` page of the App that needs to be configured, select the `llava` model under the Ollama provider, and use it after configuring the model parameters.
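The Base URL guidance above can be summarized in a small helper (illustrative only; whether Dify runs in Docker is something you must know about your own setup, the function does not detect it):

```python
def ollama_base_url(dify_in_docker, host_lan_ip=None):
    """Pick the Ollama Base URL per the guidance above.

    - Source-code deployment: Ollama is reachable on localhost.
    - Docker deployment: localhost would point at the Dify container itself,
      so use the host's LAN IP or Docker's host alias instead.
    """
    port = 11434  # Ollama's default API port
    if not dify_in_docker:
        return f"http://localhost:{port}"
    host = host_lan_ip or "host.docker.internal"
    return f"http://{host}:{port}"
```

For example, `ollama_base_url(True, "192.168.1.100")` yields the LAN-IP form recommended for Docker deployments.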
+
+## FAQ
+
+### ⚠️ If you are using Docker to deploy Dify and Ollama, you may encounter the following error:
+
+```bash
+httpconnectionpool(host=127.0.0.1, port=11434): max retries exceeded with url:/api/chat (Caused by NewConnectionError(': fail to establish a new connection:[Errno 111] Connection refused'))
+
+httpconnectionpool(host=localhost, port=11434): max retries exceeded with url:/api/chat (Caused by NewConnectionError(': fail to establish a new connection:[Errno 111] Connection refused'))
+```
+
+This error occurs because the Ollama service is not accessible from the Docker container. `localhost` usually refers to the container itself, not the host machine or other containers.
+
+You need to expose the Ollama service to the network to resolve this issue.
+
+### Setting environment variables on Mac
+
+If Ollama is run as a macOS application, environment variables should be set using `launchctl`:
+
+1. For each environment variable, call `launchctl setenv`.
+
+   ```bash
+   launchctl setenv OLLAMA_HOST "0.0.0.0"
+   ```
+2. Restart the Ollama application.
+3. If the above steps are ineffective, you can use the following method:
+
+   The issue lies within Docker itself; to access the Docker host, you should connect to `host.docker.internal`. Therefore, replacing `localhost` with `host.docker.internal` in the server URL makes it work:
+
+   ```bash
+   http://host.docker.internal:11434
+   ```
+
+### Setting environment variables on Linux
+
+If Ollama is run as a systemd service, environment variables should be set using `systemctl`:
+
+1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.
+2. For each environment variable, add an `Environment` line under the `[Service]` section:
+
+   ```ini
+   [Service]
+   Environment="OLLAMA_HOST=0.0.0.0"
+   ```
+3. Save and exit.
+4.
Reload `systemd` and restart Ollama:
+
+   ```bash
+   systemctl daemon-reload
+   systemctl restart ollama
+   ```
+
+### Setting environment variables on Windows
+
+On Windows, Ollama inherits your user and system environment variables.
+
+1. First, quit Ollama by clicking its icon in the taskbar.
+2. Edit system environment variables from the Control Panel.
+3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.
+4. Click OK/Apply to save.
+5. Run `ollama` from a new terminal window.
+
+### How can I expose Ollama on my network?
+
+Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the `OLLAMA_HOST` environment variable.
diff --git a/en/development/models-integration/openllm.mdx b/en/development/models-integration/openllm.mdx
new file mode 100644
index 00000000..52dbbed9
--- /dev/null
+++ b/en/development/models-integration/openllm.mdx
@@ -0,0 +1,29 @@
+---
+title: Connecting to OpenLLM Local Deployed Models
+---
+
+
+With [OpenLLM](https://github.com/bentoml/OpenLLM), you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps.
+Dify supports connecting to the inference capabilities of large language models deployed locally with OpenLLM.
+
+## Deploy OpenLLM Model
+### Starting OpenLLM
+
+Each OpenLLM server can deploy one model, and you can deploy one in the following way:
+
+```bash
+docker run --rm -it -p 3333:3000 ghcr.io/bentoml/openllm start facebook/opt-1.3b --backend pt
+```
+
+> Note: The `facebook/opt-1.3b` model is used here only for demonstration, and the results may be mediocre. Please choose an appropriate model according to your actual situation. For more models, please refer to the [supported model list](https://github.com/bentoml/OpenLLM#-supported-models).
+
+After the model is deployed, use the connected model in Dify.
+
+Fill in under `Settings > Model Providers > OpenLLM`:
+
+- Model Name: `facebook/opt-1.3b`
+- Server URL: `http://:3333` (replace with your machine's IP address)
+
+Click "Save" and the model can be used in the application.
+
+These instructions are only a quick-connection example. For more features and information on using OpenLLM, please refer to [OpenLLM](https://github.com/bentoml/OpenLLM).
\ No newline at end of file
diff --git a/en/development/models-integration/replicate.mdx b/en/development/models-integration/replicate.mdx
new file mode 100644
index 00000000..5e6c81db
--- /dev/null
+++ b/en/development/models-integration/replicate.mdx
@@ -0,0 +1,19 @@
+---
+title: Integrate Open Source Models from Replicate
+---
+
+
+Dify supports accessing [Language models](https://replicate.com/collections/language-models) and [Embedding models](https://replicate.com/collections/embedding-models) on Replicate. Language models correspond to Dify's reasoning model, and Embedding models correspond to Dify's Embedding model.
+
+The specific steps are as follows:
+
+1. You need a Replicate account ([sign-up page](https://replicate.com/signin?next=/docs)).
+2. Get an API key ([token page](https://replicate.com/signin?next=/docs)).
+3. Pick a model under [Language models](https://replicate.com/collections/language-models) or [Embedding models](https://replicate.com/collections/embedding-models).
+4. Add models in Dify's `Settings > Model Provider > Replicate`.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/b11aa84eb58e4457b47696f077389e37.png)
+
+The API key is the API Key set in step 2.
Model Name and Model Version can be found on the model details page:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/95e2ad371c82ef7ef641192f2bf1a1f8.png)
diff --git a/en/development/models-integration/xinference.mdx b/en/development/models-integration/xinference.mdx
new file mode 100644
index 00000000..78949ee9
--- /dev/null
+++ b/en/development/models-integration/xinference.mdx
@@ -0,0 +1,57 @@
+---
+title: Integrate Local Models Deployed by Xinference
+---
+
+
+[Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models, and can even be used on laptops. It supports various models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, etc. Dify supports connecting to the inference and embedding capabilities of large language models deployed locally with Xinference.
+
+## Deploy Xinference
+
+Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature. This allows you to map container ports to local machine ports, enabling access via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
+
+If you do need to use the container's IP address directly, you can obtain it with `docker inspect <container>`.
+
+### Starting Xinference
+
+There are two ways to deploy Xinference, namely [local deployment](https://github.com/xorbitsai/inference/blob/main/README.md#local) and [distributed deployment](https://github.com/xorbitsai/inference/blob/main/README.md#distributed); here we take local deployment as an example.
+
+1. First, install Xinference via PyPI:
+
+   ```bash
+   $ pip install "xinference[all]"
+   ```
+2.
Start Xinference locally:
+
+   ```bash
+   $ xinference-local
+   2023-08-20 19:21:05,265 xinference   10148 INFO     Xinference successfully started. Endpoint: http://127.0.0.1:9997
+   2023-08-20 19:21:05,266 xinference.core.supervisor 10148 INFO     Worker 127.0.0.1:37822 has been added successfully
+   2023-08-20 19:21:05,267 xinference.deploy.worker 10148 INFO     Xinference worker successfully started.
+   ```
+
+   Xinference starts a worker locally by default, with the endpoint `http://127.0.0.1:9997` and the default port `9997`. By default, access is limited to the local machine only, but it can be configured with `-H 0.0.0.0` to allow access from any non-local client. To modify the host or port, refer to Xinference's help information: `xinference-local --help`.
+
+   > If you use the Dify Docker deployment method, pay attention to the network configuration to ensure that the Dify container can reach Xinference's endpoint. The Dify container cannot access `localhost` (it refers to the container itself), so use the host's IP address instead.
+3. Create and deploy the model
+
+   Visit `http://127.0.0.1:9997`, and select the model and specification you need to deploy, as shown below:
+
+   ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/development/models-integration/6945f8e7a0ae88d1f67f988d53d420bd.png)
+
+   As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
+4. Obtain the model UID
+
+   Copy the model UID from the `Running Models` page, such as `2c886330-8849-11ee-9518-43b0b8f40bea`.
+5. After the model is deployed, connect the deployed model in Dify.
+
+   In `Settings > Model Providers > Xinference`, enter:
+
+   * Model name: `vicuna-v1.3`
+   * Server URL: `http://:9997` **Replace with your machine IP address**
+   * Model UID: `2c886330-8849-11ee-9518-43b0b8f40bea`
+
+   Click "Save" to use the model in the Dify application.
+
+Dify also supports using [Xinference builtin models](https://github.com/xorbitsai/inference/blob/main/README.md#builtin-models) as Embedding models; just select the Embeddings type in the configuration box.
+
+For more information about Xinference, please refer to: [Xorbits Inference](https://github.com/xorbitsai/inference)
diff --git a/en/getting-started/dify-premium-on-aws.mdx b/en/getting-started/dify-premium.mdx
similarity index 100%
rename from en/getting-started/dify-premium-on-aws.mdx
rename to en/getting-started/dify-premium.mdx
diff --git a/en/getting-started/install-self-hosted/bt-panel.mdx b/en/getting-started/install-self-hosted/aa-panel.mdx
similarity index 100%
rename from en/getting-started/install-self-hosted/bt-panel.mdx
rename to en/getting-started/install-self-hosted/aa-panel.mdx
diff --git a/en/getting-started/install-self-hosted/docker-compose.mdx b/en/getting-started/install-self-hosted/docker-compose.mdx
index ea0a8cd3..dbcc3c9e 100644
--- a/en/getting-started/install-self-hosted/docker-compose.mdx
+++ b/en/getting-started/install-self-hosted/docker-compose.mdx
@@ -59,51 +59,53 @@ git clone https://github.com/langgenius/dify.git --branch 0.15.3
 ```bash
 cd dify/docker
 ```
+2. Copy the environment configuration file
+
+   ```bash
+   cp .env.example .env
+   ```
Start the Docker containers
 
-   * If you have Docker Compose V2, use the` command to check the version, and refer to the [Docker documentation](https://docs.docker.com/compose/install/) for more information:
+   Choose the appropriate command to start the containers based on the Docker Compose version on your system. You can use the `$ docker compose version` command to check the version, and refer to the [Docker documentation](https://docs.docker.com/compose/install/) for more information:
 
    * If you have Docker Compose V2, use the following command:
 
-     `utput similar to the following, showing the status ```bash - docker-compose up -d - ````bash -[+] Running 11/11 - ✔ Network docker_ssrf_proxy_network Created `bash
+     ```bash
      docker compose up -d
-     `etwork dCODE`
+     ```
    * If you have Docker Compose V1, use the following command:
 
-     `ssrf_proxy / sandbox` .
+     ```bash
+     docker-compose up -d
+     ```
+
+After executing the command, you should see output similar to the following, showing the status and port mappings of all containers:
+
+```bash
+[+] Running 11/11
+ ✔ Network docker_ssrf_proxy_network  Created    0.1s
+ ✔ Network docker_default             Created    0.0s
+ ✔ Container docker-redis-1           Started    2.4s
+ ✔ Container docker-ssrf_proxy-1      Started    2.8s
+ ✔ Container docker-sandbox-1         Started    2.7s
+ ✔ Container docker-web-1             Started    2.7s
+ ✔ Container docker-weaviate-1        Started    2.4s
+ ✔ Container docker-db-1              Started    2.7s
+ ✔ Container docker-api-1             Started    6.5s
+ ✔ Container docker-worker-1          Started    6.4s
+ ✔ Container docker-nginx-1           Started    7.1s
+```
+
+Finally, check if all containers are running successfully:
+
+```bash
+docker compose ps
+```
+
+This includes 3 core services: `api / worker / web`, and 6 dependent components: `weaviate / db / redis / nginx / ssrf_proxy / sandbox`.
+
-NAME `bash - ```bash - docker compose up -d - ```cuting the command, you should see output similar to the following, showing the status ```bash - docker-compose up -d - ````pi-1 langgenius/dify-api:0.6.13 "/bin/bash /entrypoi…" api About a minute ago Up About a minute 5001/tcp
-docker-db-1 postgres:15-alpine "docker-entrypoint.s…" db About a minute ago Up About a minute (healthy) 5432/tcp
-docker-nginx-`weaviate / db / redis / nginx / ssrf_proxy / sandbox` /docker-e…" nginx About a minute ago Up About a minute 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp
-docker-redis-1 redis:6-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
-docker-sandbox-1 langgenius/dify-sandbox:0.2.1 "/main" sandbox About a minute ago Up About a minute
-docker-ssrf_proxy-1 ubuntu/squid:latest "sh -c 'cp /docker-e…" ssrf_proxy About a minute ago Up About a minute 3128/tcp
-docker-weaviate-1 semitechnologies/weaviate:1.19.0 "/bin/weaviate --hos…" weaviate About a minute ago ```bash docker compose ps -```
-docker-web-1 langgenius/dify-web:0.6.13 "/bin/sh ./entrypoin…" web About a minute ago Up About a minute ```bash
+```bash
 NAME                  IMAGE                              COMMAND                   SERVICE      CREATED              STATUS                        PORTS
 docker-api-1          langgenius/dify-api:0.6.13         "/bin/bash /entrypoi…"   api          About a minute ago   Up About a minute             5001/tcp
 docker-db-1           postgres:15-alpine                 "docker-entrypoint.s…"   db           About a minute ago   Up About a minute (healthy)   5432/tcp
@@ -114,20 +116,60 @@ docker-ssrf_proxy-1   ubuntu/squid:latest                "sh -c 'cp /docker-e…
 docker-weaviate-1     semitechnologies/weaviate:1.19.0   "/bin/weaviate --hos…"   weaviate     About a minute ago   Up About a minute
 docker-web-1          langgenius/dify-web:0.6.13         "/bin/sh ./entrypoin…"   web          About a minute ago   Up About a minute             3000/tcp
 docker-worker-1       langgenius/dify-api:0.6.13         "/bin/bash /entrypoi…"   worker       About a minute ago   Up About a minute             5001/tcp
-``````bash
+```
+
+With these steps, you should be able to install Dify successfully.
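As a quick sanity check, a captured `docker compose ps` listing can be scanned for the services named above. This is a rough, illustrative helper — not part of Dify — that assumes the `docker-<service>-1` container naming shown in the sample output:

```python
# Services listed in the guide: 3 core services plus 6 dependent components.
EXPECTED = {"api", "worker", "web", "weaviate", "db", "redis", "nginx", "ssrf_proxy", "sandbox"}

def missing_services(ps_output: str, expected=EXPECTED) -> set:
    """Return the expected services that have no 'Up' row in the listing."""
    up = set()
    for line in ps_output.splitlines():
        if " Up " not in line:
            continue  # skip the header and any non-running rows
        for svc in expected:
            # Container names follow the docker-<service>-1 pattern shown above.
            if f"-{svc}-" in line:
                up.add(svc)
    return expected - up

sample = """NAME           IMAGE                        COMMAND   SERVICE   CREATED   STATUS                        PORTS
docker-api-1   langgenius/dify-api:0.6.13   "..."     api       1m ago    Up About a minute             5001/tcp
docker-db-1    postgres:15-alpine           "..."     db        1m ago    Up About a minute (healthy)   5432/tcp
"""
print(missing_services(sample))  # every expected service except api and db
```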
+
+### Upgrade Dify
+
+Enter the docker directory of the Dify source code and execute the following commands:
+
+```bash
 cd dify/docker
 docker compose down
 git pull origin main
 docker compose pull
 docker compose up -d
-``````bash
+```
+
+#### Sync Environment Variable Configuration (Important)
+
+* If the `.env.example` file has been updated, be sure to modify your local `.env` file accordingly.
+* Check and modify the configuration items in the `.env` file as needed to ensure they match your actual environment. You may need to add any new variables from `.env.example` to your `.env` file and update any values that have changed.
+
+### Access Dify
+
+Access the administrator initialization page to set up the admin account:
+
+```bash
 # Local environment
 http://localhost/install
 
 # Server environment
 http://your_server_ip/install
-``````bash - cd dify/docker - ```0```bash - cd dify/docker - ```1
\ No newline at end of file
+```
+
+Dify web interface address:
+
+```bash
+# Local environment
+http://localhost
+
+# Server environment
+http://your_server_ip
+```
+
+### Customize Dify
+
+Edit the environment variable values in your `.env` file directly, then restart Dify with:
+
+```
+docker compose down
+docker compose up -d
+```
+
+The full set of annotated environment variables can be found under `docker/.env.example`.
+
+### Read More
+
+If you have any questions, please refer to [FAQs](faqs.md).
diff --git a/en/getting-started/install-self-hosted/faqs.mdx b/en/getting-started/install-self-hosted/faqs.mdx
index deef7567..9faffbb9 100644
--- a/en/getting-started/install-self-hosted/faqs.mdx
+++ b/en/getting-started/install-self-hosted/faqs.mdx
@@ -5,7 +5,7 @@ title: FAQs
 
 ### 1. Not receiving reset password emails
 
-You need to configure the `Mail``.env`eters in the `.env` file. For detailed instructions, please refer to ["Environment Variables Explanation: Mail-related configuration"](https://docs.dify.ai/getting-started/install-self-hosted/environments#mail-related-configuration).
+You need to configure the `Mail` parameters in the `.env` file. For detailed instructions, please refer to ["Environment Variables Explanation: Mail-related configuration"](https://docs.dify.ai/getting-started/install-self-hosted/environments#mail-related-configuration).
 
 After modifying the configuration, run the following commands to restart the service:
 
@@ -18,24 +18,23 @@ If you still haven't received the email, please check if the email service is wo
 
 ### 2. How to handle if the workflow is too complex and exceeds the node limit?
 
-In the community edition, you can manually adjust the MAX\_TREE\_D`web/app/components/workflow/constants.ts.`app/components/workflow/constants.ts.` Our default value is 50, and it's important to note that excessively deep branches may affect performance in self-hosted scenarios.
+In the community edition, you can manually adjust the MAX\_TREE\_DEPTH limit for single-branch depth in `web/app/components/workflow/constants.ts`. The default value is 50; note that excessively deep branches may affect performance in self-hosted scenarios.
 
 ### 3. How to specify the runtime for each workflow node?
 
-`TEXT_GENERATION_TIMEOUT_MS`NERATION_TIMEOUT_MS``.env`ble in the `.env` file to adjust the runtime for each node. This helps prevent overall application service unavailability caused by certain processes timing out.
+You can modify the `TEXT_GENERATION_TIMEOUT_MS` variable in the `.env` file to adjust the runtime for each node. This helps prevent overall application service unavailability caused by certain processes timing out.
 
 ### 4. How to reset the password of the admin account?
 
-If you deployed using Docker Compose, you can reset the password with the following command while`
-docker exec -it docker-a```
-docker exec -it docker-api-1 flask reset-password
-```r the email address and the new password. Example:
-
-`ss and the new password. Example:
+If you deployed using Docker Compose, you can reset the password with the following command while your Docker Compose is running:
 
 ```
-dify@my-pc:~/hello/dify/docker$ docker c`
-dify@my-pc:~/hello/dify/docker$ docker compose up -d
-[+] ```
+docker exec -it docker-api-1 flask reset-password
+```
+
+It will prompt you to enter the email address and the new password. Example:
+
+```bash
 dify@my-pc:~/hello/dify/docker$ docker compose up -d
 [+] Running 9/9
  ✔ Container docker-web-1   Started   0.1s
@@ -55,7 +54,13 @@
 Email: hello@dify.ai
 New password: newpassword4567
 Password confirm: newpassword4567
 Password reset successfully.
-```se, you can customize the access port by modifying the `ify the Nginx configuration:
+```
+
+### 5. How to Change the Port
+
+If you're using Docker Compose, you can customize the access port by modifying the `.env` configuration file.
+
+You need to modify the Nginx configuration:
 
 ```json
 EXPOSE_NGINX_PORT=80
@@ -63,14 +68,4 @@ EXPOSE_NGINX_SSL_PORT=443
 ```
 
-Other self-host issue pleas` configuration file.
-
-You need to modify the Nginx configuration:
-
-`l-faq.md)。```json
-EXPOSE_NGINX_PORT=80
-EXPOSE_NGINX_SSL_PORT=443
-````json
-EXPOSE_NGINX_PORT=80
-EXPOSE_NGINX_SSL_PORT=443
-`
\ No newline at end of file
+For other self-hosting issues, please check this document: [Self-Host Related](../../learn-more/faq/install-faq.md).
\ No newline at end of file
diff --git a/en/getting-started/install-self-hosted/local-source-code.mdx b/en/getting-started/install-self-hosted/local-source-code.mdx
index b279017e..218346b0 100644
--- a/en/getting-started/install-self-hosted/local-source-code.mdx
+++ b/en/getting-started/install-self-hosted/local-source-code.mdx
@@ -49,13 +49,19 @@ title: Local Source Code Start
 git clone https://github.com/langgenius/dify.git
 ```
 
-Before enabling business services, we need to first deploy PostgreSQL / Redis / Weaviate (if not locally available). We can start them with the follow`Bash
-cd docker
-cp middleware.env.```Bash
+Before enabling business services, we need to first deploy PostgreSQL / Redis / Weaviate (if not locally available). We can start them with the following commands:
+
+```Bash
 cd docker
 cp middleware.env.example middleware.env
 docker compose -f docker-compose.middleware.yaml up -d
-```Interface Service
+```
+
+---
+
+### Server Deployment
+
+- API Interface Service
 - Worker Asynchronous Queue Consumption Service
 
 #### Installation of the basic environment:
@@ -64,8 +70,6 @@
 Server startup requires Python 3.12. It is recommended to use [pyenv](https://github.com/pyenv/pyenv) for quick installation of the Python environment.
 
 To install additional Python versions, use pyenv install.
 
-`pyenv install.
-
 ```Bash
 pyenv install 3.12
 ```
@@ -73,9 +77,10 @@
 To switch to the "3.12" Python environment, use the following command:
 
 ```Bash
-pyenv global 3.12```Bash
-pyenv install 3.12
-```:
+pyenv global 3.12
+```
+
+#### Follow these steps:
 
 1. Navigate to the "api" directory:
 
@@ -83,255 +88,189 @@
 cd api
 ```
 
-> For macOS```Bash
-pyenv global 3.12
-``` install libmagic`.
+> For macOS: install libmagic with `brew install libmagic`.
 
 1. Copy the environment variable configuration file:
 
-CODE_BLOCK`Bash -pyenv install 3.12 -` `
+   ```
+   cp .env.example .env
+   ```
 
-To switch to the "3.12" Python environment, use the following command:
-
-`n the .env file:
+2. Generate a random secret key and replace the value of SECRET_KEY in the .env file:
 
-`Bash -pyenv global 3.12```Bash -pyenv install 3.12 -```:
-
-1. Navigate to the "api" directory:
-
- `. Install the ````poetry shell` to activate theINLINE_CODE_PLACEHO`
+   ```
+   awk -v key="$(openssl rand -base64 42)" '/^SECRET_KEY=/ {sub(/=.*/, "=" key)} 1' .env > temp_env && mv temp_env .env
+   ```
 
+3. Install the required dependencies:
+
+   Dify API service uses [Poetry](https://python-poetry.org/docs/) to manage dependencies.
+
+   ```
+   poetry env use 3.12
+   poetry install
+   ```
 
 4. Perform the database migration:
 
    Perform database migration to the latest version:
 
    ```
-   poetry shell
-   fl`
-
-`OLDER_7 ``` - * Debug mode: on - INFO:werkzeug:WARNING: This is a de` * Running on http://127.0``` - flask run --host 0.0.0.0 --port=5001 --debug - ```rkzeug: * Restarting with stat - WARN`
+   poetry run flask db upgrade
    ```
 
-`ws system to start the Worker s`Bash -cd docker -cp middleware.env.example middleware.env -docker compose -f docker-compose.middleware.yaml up -d -```Interface Service -- Worker Asynchronous Queue Consumption Service
+5. Start the API server:
+
+   ```
+   poetry run flask run --host 0.0.0.0 --port=5001 --debug
+   ```
+
+   output:
+
+   ```
+   * Debug mode: on
+   INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
+   * Running on all addresses (0.0.0.0)
+   * Running on http://127.0.0.1:5001
+   INFO:werkzeug:Press CTRL+C to quit
+   INFO:werkzeug: * Restarting with stat
+   WARNING:werkzeug: * Debugger is active!
+   INFO:werkzeug: * Debugger PIN: 695-801-919
+   ```
+
+6. Start the Worker service
+
+   To consume asynchronous tasks from the queue, such as dataset file import and dataset document updates, follow these steps to start the Worker service on Linux or macOS:
+
+   ```
+   poetry run celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace
+   ```
+
+   If you are using a Windows system to start the Worker service, please use the following command instead:
+
+   ```
+   poetry run celery -A app.celery worker -P solo --without-gossip --without-mingle -Q dataset,generation,mail,ops_trace --loglevel INFO
+   ```
+
+   output:
+
+   ```
+    -------------- celery@TAKATOST.lan v5.2.7 (dawn-chorus)
+   --- ***** -----
+   -- ******* ---- macOS-10.16-x86_64-i386-64bit 2023-07-31 12:58:08
+   - *** --- * ---
+   - ** ---------- [config]
+   - ** ---------- .> app:         app:0x7fb568572a10
+   - ** ---------- .> transport:   redis://:**@localhost:6379/1
+   - ** ---------- .> results:     postgresql://postgres:**@localhost:5432/dify
+   - *** --- * --- .> concurrency: 1 (gevent)
+   -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
+   --- ***** -----
+    -------------- [queues]
+                   .> dataset          exchange=dataset(direct) key=dataset
+                   .> generation       exchange=generation(direct) key=generation
+                   .> mail             exchange=mail(direct) key=mail
+
+   [tasks]
+     . tasks.add_document_to_index_task.add_document_to_index_task
+     . tasks.clean_dataset_task.clean_dataset_task
+     . tasks.clean_document_task.clean_document_task
+     . tasks.clean_notion_document_task.clean_notion_document_task
+     . tasks.create_segment_to_index_task.create_segment_to_index_task
+     . tasks.deal_dataset_vector_index_task.deal_dataset_vector_index_task
+     . tasks.document_indexing_sync_task.document_indexing_sync_task
+     . tasks.document_indexing_task.document_indexing_task
+     . tasks.document_indexing_update_task.document_indexing_update_task
+     . tasks.enable_segment_to_index_task.enable_segment_to_index_task
+     . tasks.generate_conversation_summary_task.generate_conversation_summary_task
+     . tasks.mail_invite_member_task.send_invite_member_mail_task
+     . tasks.remove_document_from_index_task.remove_document_from_index_task
+     . tasks.remove_segment_from_index_task.remove_segment_from_index_task
+     . tasks.update_segment_index_task.update_segment_index_task
+     . tasks.update_segment_keyword_index_task.update_segment_keyword_index_task
+
+   [2023-07-31 12:58:08,831: INFO/MainProcess] Connected to redis://:**@localhost:6379/1
+   [2023-07-31 12:58:08,840: INFO/MainProcess] mingle: searching for neighbors
+   [2023-07-31 12:58:09,873: INFO/MainProcess] mingle: all alone
+   [2023-07-31 12:58:09,886: INFO/MainProcess] pidbox: Connected to redis://:**@localhost:6379/1.
+   [2023-07-31 12:58:09,890: INFO/MainProcess] celery@TAKATOST.lan ready.
+   ```
+
+---
+
+## Deploy the frontend page
+
+Start the web frontend client page service
 
 #### Installation of the basic environment:
 
+To start the web frontend service, you will need [Node.js v18.x (LTS)](http://nodejs.org/) and [NPM version 8.x.x](https://www.npmjs.com/) or [Yarn](https://yarnpkg.com/).
+
+- Install NodeJS + NPM
+
+Please visit [https://nodejs.org/en/download](https://nodejs.org/en/download) and choose the installation package for your operating system that is v18.x or higher. It is recommended to download the stable version, which includes NPM by default.
 
 #### Follow these steps:
 
+1. Enter the web directory
+
+   ```
+   cd web
+   ```
+
+2. Install the dependencies.
+
+   ```
+   npm install
+   ```
+
+3. Configure the environment variables. Create a file named .env.local in the current directory and copy the contents from .env.example. Modify the values of these environment variables according to your requirements:
 
 ```
 # For production release, change this to PRODUCTION
 NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT
 # The deployment edition, SELF_HOSTED or CLOUD
 NEXT_PUBLIC_EDITION=SELF_HOSTED
+   # The base URL of console application, refers to the Console base URL of WEB service if console domain is
+   # different from api or web app domain.
+   # example: http://cloud.dify.ai/console/api
+   NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api
+   # The URL for Web APP, refers to the Web App base URL of WEB service if web app domain is different from
+   # console or api domain.
+   # example: http://udify.app/api
+   NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api
 
+   # SENTRY
+   NEXT_PUBLIC_SENTRY_DSN=
+   NEXT_PUBLIC_SENTRY_ORG=
+   NEXT_PUBLIC_SENTRY_PROJECT=
+   ```
 
+4. Build the code
 
+   ```
+   npm run build
+   ```
 
+5. Start the web service
 
+   ```
+   npm run start
+   # or
+   yarn start
+   # or
+   pnpm start
+   ```
 
+After successful startup, the terminal will output the following information:
 
+```
+ready - started server on 0.0.0.0:3000, url: http://localhost:3000
+warn  - You have enabled experimental feature (appDir) in next.config.js.
+warn  - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
+info  - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback
+```
 
 ### Access Dify
 
 Finally, access [http://127.0.0.1:3000](http://127.0.0.1:3000/) to use the locally deployed Dify.
-```Bash cd docker cp middleware.env.example middleware.env docker compose -f docker-compose.middleware.yaml up -d ```4
-`Bash pyenv install 3.12 `0
\ No newline at end of file
diff --git a/en/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx b/en/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx
index f0ce7313..9b9625e3 100644
--- a/en/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx
+++ b/en/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx
@@ -19,12 +19,11 @@ docker run -it -p 3000:3000 -e CONSOLE_URL=http://127.0.0.1:5001 -e APP_URL=http
 
 cd web && docker build . -t dify-web
 ```
 
-2. Start the fronte`
-2. Start the frontend image
-
-`CODE_BLOCK_PLA`
-docker run -it -```
-cd web && docker build . -t dify-web
-```APP_URL=http://127.0.0.1:5001 dify-web
-`HOLDER_2 you can visit [http://127.0.0.1:3000](http://127.0.0.1:3000/)
+2. Start the frontend image
+
+   ```
+   docker run -it -p 3000:3000 -e CONSOLE_URL=http://127.0.0.1:5001 -e APP_URL=http://127.0.0.1:5001 dify-web
+   ```
+
+3.
When the console domain and web app domain are different, you can set the CONSOLE_URL and APP_URL separately.
+4. To access it locally, visit [http://127.0.0.1:3000](http://127.0.0.1:3000/).
diff --git a/en/guides/annotation/README.mdx b/en/guides/annotation/README.mdx
new file mode 100644
index 00000000..f4147616
--- /dev/null
+++ b/en/guides/annotation/README.mdx
@@ -0,0 +1,5 @@
+---
+title: Annotation
+---
+
+
diff --git a/en/guides/annotation/annotation-reply.mdx b/en/guides/annotation/annotation-reply.mdx
new file mode 100644
index 00000000..2703ff62
--- /dev/null
+++ b/en/guides/annotation/annotation-reply.mdx
@@ -0,0 +1,87 @@
+---
+title: Annotation Reply
+---
+
+
+The annotated replies feature provides customizable, high-quality question-and-answer responses through manual editing and annotation.
+
+Applicable scenarios:
+
+* **Customized Responses for Specific Fields:** In customer service or knowledge base scenarios for enterprises, government, etc., service providers may want to ensure that certain specific questions are answered with definitive results, so it is necessary to customize the output for those questions — for example, creating "standard answers" for certain questions or marking some questions as "unanswerable."
+* **Rapid Tuning for POC or DEMO Products:** When quickly building prototype products, customized responses achieved through annotated replies can efficiently improve the quality of generated Q\&A results, thereby improving customer satisfaction.
+
+The annotated replies feature essentially provides another retrieval-augmented system, allowing you to bypass the LLM generation phase and avoid the hallucination issues of RAG.
+
+### Workflow
+
+1. After enabling the annotated replies feature, you can annotate the responses from LLM conversations: add high-quality answers from LLM responses directly as annotations, or edit a high-quality answer according to your needs.
These edited annotations will be saved persistently.
+2. When a user asks a similar question again, the system will vectorize the question and search for similar annotated questions.
+3. If a match is found, the corresponding answer from the annotation will be returned directly, bypassing the LLM or RAG process.
+4. If no match is found, the question will continue through the regular process (passing to the LLM or RAG).
+5. Once the annotated replies feature is disabled, the system will no longer match responses from annotations.
+
+![Annotated Replies Workflow](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/7bebcf85d52f65d5649956f47ed33d43.png)
+
+### Enabling Annotated Replies in Prompt Orchestration
+
+Enable the annotated replies switch by navigating to **"Orchestrate -> Add Features"**:
+
+![Enabling Annotated Replies in Prompt Orchestration](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/b467da1fbaa9beb22cfb2a987f51f653.png)
+
+When enabling it, you need to set the parameters for annotated replies: the Score Threshold and the Embedding Model.
+
+**Score Threshold:** Sets the similarity threshold for matching annotated replies. Only annotations with scores above this threshold will be recalled.
+
+**Embedding Model:** Used to vectorize the annotated text. Changing the model will regenerate the embeddings.
+
+Click save and enable, and the settings will take effect immediately. The system will generate embeddings for all saved annotations using the embedding model.
+
+![Setting Parameters for Annotated Replies](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/a2c7b82a4f25a96fcdf68c807fb96812.png)
+
+### Adding Annotations in the Conversation Debug Page
+
+You can directly add or edit annotations on the model responses in the debug and preview pages.
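The matching flow in the workflow steps above can be sketched in Python. The bag-of-words "embedding" and all names below are illustrative stand-ins for the real embedding model and are not Dify's implementation:

```python
from math import sqrt

SCORE_THRESHOLD = 0.9  # similarity threshold, as configured in the feature settings

def embed(text: str) -> dict:
    """Toy bag-of-words 'embedding' standing in for the real embedding model."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(question: str, annotations: dict):
    """Return the annotated answer if a similar question scores above the
    threshold, otherwise None (fall through to the regular LLM / RAG process)."""
    q = embed(question)
    best_score, best_answer = 0.0, None
    for annotated_q, answer in annotations.items():
        score = cosine(q, embed(annotated_q))
        if score > best_score:
            best_score, best_answer = score, answer
    return best_answer if best_score >= SCORE_THRESHOLD else None

annotations = {"how do I reset my password": "Use the self-service portal."}
print(reply("How do I reset my password", annotations))  # matched → annotated answer
print(reply("What is the refund policy", annotations))   # no match → None
```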
+
+![Adding Annotated Replies](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/e064e3dcca3f04e16f5269b169820d2d.png)
+
+Edit the response into the high-quality reply you need and save it.
+
+![Editing Annotated Replies](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/b79aabe6e9b336e26ca409a49526501e.png)
+
+Re-enter the same user question, and the system will use the saved annotation to reply directly.
+
+![Replying to User Questions with Saved Annotations](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/810f640d184227f4918ee197ff906203.png)
+
+### Enabling Annotated Replies in Logs and Annotations
+
+Enable the annotated replies switch by navigating to "Logs & Ann. -> Annotations":
+
+![Enabling Annotated Replies in Logs and Annotations](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/c74951765f078392924da901008eb815.png)
+
+### Setting Parameters for Annotated Replies in the Annotation Backend
+
+The parameters that can be set for annotated replies are the Score Threshold and the Embedding Model.
+
+**Score Threshold:** Sets the similarity threshold for matching annotated replies. Only annotations with scores above this threshold will be recalled.
+
+**Embedding Model:** Used to vectorize the annotated text. Changing the model will regenerate the embeddings.
+
+![Setting Parameters for Annotated Replies](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/5bbd94402452e3f4ecc29eb398591585.png)
+
+### Bulk Import of Annotated Q\&A Pairs
+
+In the bulk import feature, you can download the annotation import template, edit the annotated Q\&A pairs according to the template format, and then import them in bulk.
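The exact format of the import template is not specified here; assuming a simple two-column CSV of question/answer pairs, a bulk-import file could be parsed as follows (the column names are an assumption for illustration, not the documented template):

```python
import csv
import io

def load_annotation_pairs(csv_text: str):
    """Parse annotated Q&A pairs from CSV text with 'question' and 'answer' columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    pairs = []
    for row in reader:
        q = (row.get("question") or "").strip()
        a = (row.get("answer") or "").strip()
        if q and a:  # skip incomplete rows rather than importing bad data
            pairs.append((q, a))
    return pairs

template = """question,answer
How do I reset my password?,Use the self-service portal.
,missing question is skipped
"""
print(load_annotation_pairs(template))
```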
+
+![Bulk Import of Annotated Q&A Pairs](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/ad6497dbe8c93fe9988cf76775434a7c.png)
+
+### Bulk Export of Annotated Q\&A Pairs
+
+Through the bulk export feature, you can export all saved annotated Q\&A pairs in the system at once.
+
+![Bulk Export of Annotated Q&A Pairs](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/4d80d0a9b8056711a2dcdf664c19e840.png)
+
+### Viewing Annotation Hit History
+
+In the annotation hit history feature, you can view the edit history of all annotation hits, the user questions that triggered them, the response answers, the source of the hits, the matching similarity scores, the hit time, and other information. Use this information to continuously improve your annotated content.
+
+![Viewing Annotation Hit History](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/26b6c37dcff225201ea5b4fb712b2d4d.png)
diff --git a/en/guides/annotation/logs.mdx b/en/guides/annotation/logs.mdx
new file mode 100644
index 00000000..b68139b8
--- /dev/null
+++ b/en/guides/annotation/logs.mdx
@@ -0,0 +1,38 @@
+---
+title: Logs and Annotation
+---
+
+
+
+Please ensure that your application complies with local regulations when collecting user data. The common practice is to publish a privacy policy and obtain user consent.
+
+
+The **Logs** feature is designed to observe and annotate the performance of Dify applications. Dify records logs for all interactions with the application, whether through the WebApp or the API. If you are a prompt engineer or LLM operator, it provides a visual view of how your LLM application operates.
+
+### Using the Logs Console
+
+You can find the Logs in the left navigation of the application.
This page typically displays: + +* Interaction records between users and AI within the selected timeframe +* The results of user input and AI output, which for conversational applications are usually a series of message flows +* Ratings from users and operators, as well as improvement annotations from operators + +The logs currently do not include interaction records from the Prompt debugging process. + +> For Free tier teams, interaction logs are retained only for the last 30 days. To keep interaction history for a longer period, please visit our [pricing page](https://dify.ai/pricing) to upgrade to a higher tier or consider deploying the [Community Edition](https://docs.dify.ai/getting-started/install-self-hosted/docker-compose). + +### Improvement Annotations + + +These annotations will be used for model fine-tuning in future versions of Dify to improve model accuracy and response style. The current preview version only supports annotations. + + +![Mark logs to improve your app](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/annotation/a17d81cd50a6788df3bf6853cc963d0b.png) + +Clicking on a log entry will open the log details panel on the right side of the interface. In this panel, operators can annotate an interaction: + +* Give a thumbs up for well-performing messages +* Give a thumbs down for poorly-performing messages +* Provide an improved response, i.e. the text you expect the AI to reply with + +Please note that if multiple administrators in the team annotate the same log entry, the last annotation will overwrite the previous ones.
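As background, the annotated-reply recall described earlier works by comparing an embedding of the incoming question against saved annotation embeddings and only answering from an annotation whose similarity clears the score threshold. A minimal sketch of that idea, with hard-coded toy 2-D vectors standing in for a real embedding model (this is an illustration, not Dify's implementation):

```python
# Toy sketch of score-threshold recall for annotated replies: compare the query
# embedding against each saved annotation and answer from the best match only
# if its cosine similarity clears the threshold. Vectors are hard-coded
# stand-ins for a real embedding model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

annotations = [
    {"question_vec": [1.0, 0.0], "answer": "Use the reset link on the login page."},
    {"question_vec": [0.0, 1.0], "answer": "Billing is handled in Settings > Plan."},
]

def recall(query_vec, threshold=0.9):
    best = max(annotations, key=lambda a: cosine(query_vec, a["question_vec"]))
    score = cosine(query_vec, best["question_vec"])
    # Below the threshold, no annotation is recalled and the LLM answers instead.
    return best["answer"] if score >= threshold else None

print(recall([0.99, 0.05]))  # very close to the first annotation
print(recall([0.7, 0.7]))    # ambiguous query: no annotation recalled
```

Raising the threshold makes recall stricter (fewer, more exact hits); lowering it recalls annotations for looser matches.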
diff --git a/en/user-guide/build-app/agent.mdx b/en/guides/application-orchestrate/agent.mdx similarity index 100% rename from en/user-guide/build-app/agent.mdx rename to en/guides/application-orchestrate/agent.mdx diff --git a/en/guides/application-orchestrate/app-toolkits/moderation-tool.mdx b/en/guides/application-orchestrate/app-toolkits/moderation-tool.mdx new file mode 100644 index 00000000..dd252bbc --- /dev/null +++ b/en/guides/application-orchestrate/app-toolkits/moderation-tool.mdx @@ -0,0 +1,27 @@ +--- +title: Moderation Tool +--- + +In our interactions with AI applications, we often have stringent requirements for content security, user experience, and legal compliance. In these cases, the "Sensitive Word Review" feature helps create a better interactive environment for end-users. On the orchestration page, click "Add Feature" and locate the "Content Review" toolbox at the bottom: + +![Content moderation](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/63c5c0be18933a0578edf72c2eed1609.png) + +## Call the OpenAI Moderation API + +OpenAI, along with most companies providing LLMs, includes content moderation features in their models to ensure that outputs do not contain controversial content, such as violence, sexual content, and illegal activities. OpenAI also makes this content moderation capability available directly; see [OpenAI's Moderation](https://platform.openai.com/docs/guides/moderation/overview) for details. + +Now you can also directly call the OpenAI Moderation API on Dify; you can review either input or output content simply by entering the corresponding "preset reply."
+ +![Calling the OpenAI Moderation API](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/959415a64f32ee28461441847a96ae64.png) + +## Keywords + +Developers can customize the sensitive words they need to review, such as using "kill" as a keyword, so that a review action is triggered when a user's input contains it. With the preset reply content set to "The content is violating usage policies.", any text chunk containing "kill" entered at the terminal will trigger the sensitive word review tool and return the preset reply content. + +![Configuring keyword moderation](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/15652f65c3af15d931598a866a7baa17.png) + +## Moderation Extension + +Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge base ChatBot, enterprises need to moderate the query content input by employees for sensitive words. For this purpose, developers can write an API extension based on their enterprise's internal sensitive word moderation mechanisms, which can then be called on Dify to achieve a high degree of customization and privacy protection for sensitive word review. + +![Moderation Extension](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/d101ff5542c53b7ca6f8d57d24193916.png) diff --git a/en/guides/application-orchestrate/app-toolkits/readme.mdx b/en/guides/application-orchestrate/app-toolkits/readme.mdx new file mode 100644 index 00000000..739e710e --- /dev/null +++ b/en/guides/application-orchestrate/app-toolkits/readme.mdx @@ -0,0 +1,37 @@ +--- +title: Application Toolkits +--- + +In **Application Orchestration**, click **Add Feature** to open the application toolbox.
+ +The application toolbox provides various additional features for Dify's [applications](../#application_type): + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/d74a330f5d7723a8aa8c0ca920d0d49a.png) + +### Conversation Opening + +In conversational applications, the AI will proactively say the first sentence or ask a question. You can edit the content of the opening, including the initial question. Using conversation openings can guide users to ask questions, explain the application background, and lower the barrier for initiating a conversation. + +![Conversation Opening](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/e5362d03d059d2837c653bff6fdf9964.png) + +### Next Step Question Suggestions + +Setting next step question suggestions allows the AI to generate 3 follow-up questions based on the previous conversation, guiding the next round of interaction. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/app-toolkits/cd5b2fab9415165d7b52ca1461055ce2.png) + +### Citation and Attribution + +When this feature is enabled, the large language model will cite content from the knowledge base when responding to questions. You can view specific citation details below the response, including the original text segment, segment number, and match score. + +For more details, please see [Citation and Attribution](https://docs.dify.ai/guides/knowledge-base/retrieval-test-and-citation#id-2.-citation-and-attribution). + +### Content Moderation + +During interactions with AI applications, we often have stringent requirements regarding content safety, user experience, and legal regulations. In such cases, we need the "Sensitive Content Moderation" feature to create a better interaction environment for end users. 
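The keyword-based review described above boils down to a simple membership check before the input ever reaches the model. A minimal sketch (the keyword and preset reply mirror the example in the moderation tool docs; the function itself is illustrative, not Dify's code):

```python
# Minimal sketch of keyword moderation: if user input contains a flagged
# keyword, return the preset reply instead of passing the text to the model.
# The keyword list and reply text are illustrative examples.
KEYWORDS = {"kill"}
PRESET_REPLY = "The content is violating usage policies."

def moderate(text):
    lowered = text.lower()
    if any(keyword in lowered for keyword in KEYWORDS):
        return PRESET_REPLY  # blocked: answer with the preset reply
    return None  # allowed: continue to the LLM

print(moderate("How do I kill a process?"))  # triggers the preset reply
print(moderate("Hello there"))               # passes moderation
```

A real deployment would use the moderation extension described below for anything beyond simple substring checks.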
+ +### Annotated Replies + +The annotated replies feature allows for customizable high-quality Q\&A responses through manual editing and annotation. + +See [Annotated Replies](../../annotation/annotation-reply.md). diff --git a/en/user-guide/build-app/chatbot.mdx b/en/guides/application-orchestrate/chatbot.mdx similarity index 100% rename from en/user-guide/build-app/chatbot.mdx rename to en/guides/application-orchestrate/chatbot.mdx diff --git a/en/guides/application-orchestrate/creating-an-application.mdx b/en/guides/application-orchestrate/creating-an-application.mdx new file mode 100644 index 00000000..27311de9 --- /dev/null +++ b/en/guides/application-orchestrate/creating-an-application.mdx @@ -0,0 +1,57 @@ +--- +title: Create Application +--- + +You can create applications in Dify's studio in three ways: + +* Create based on an application template (recommended for beginners) +* Create a blank application +* Create application via DSL file (Local/Online) + +### Creating an Application from a Template + +When using Dify for the first time, you might be unfamiliar with creating applications. To help new users quickly understand what types of applications can be built on Dify, the prompt engineers from the Dify team have already created high-quality application templates for multiple scenarios. + +You can select "Studio" from the navigation menu, then choose "Create from Template" in the application list. 
+ +![Create an application from a template](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/5a29c89223d559eb67801d57895628c1.png) + +Select any template and click **Use this template.** + +![Dify application templates](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/2668f878d105cf7a9f2cb29ee4e8eb9f.png) + +### Creating a New Application + +If you need to create a blank application on Dify, you can select "Studio" from the navigation and then choose "Create from Blank" in the application list. + +![Create a blank application](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/f3fac01ad131b23ff8f45fa81a40d8a6.png) + +When creating an application for the first time, you might need to first understand the [basic concepts](./#application_type) of the five different types of applications on Dify: Chatbot, Text Generator, Agent, Chatflow, and Workflow. + +When selecting a specific application type, you can customize it by providing a name, choosing an appropriate icon (or uploading your favorite image as an icon), and writing a clear and concise description of its purpose. These details will help team members easily understand and use the application in the future. + +![](https://assets-docs.dify.ai/2024/12/8012e6ed06bfb10b239a4b999b1a0787.png) + +### Creating from a DSL File + + +Dify DSL is an AI application engineering file standard defined by Dify.AI. The file format is YML. This standard covers the basic description of the application, model parameters, orchestration configuration, and other information. + + +#### Import local DSL file + +If you have obtained a template (DSL file) from the community or others, you can choose "Import DSL File" from the studio. After importing, all configuration information of the original application will be loaded directly.
+ +![Create an application by importing a DSL file](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/1cfbe4604896c25cbb6c71bf38f1c148.png) + +#### Import DSL file from URL + +You can also import DSL files via a URL, using the following link format: + +```url +https://example.com/your_dsl.yml +``` + +![Create an application by importing a DSL file](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-orchestrate/557be7a176fba979b7f7327d6a0cf8e4.png) + +> When importing a DSL file, the version will be checked. Significant discrepancies between DSL versions may lead to compatibility issues. For more details, please refer to [Application Management: Import](https://docs.dify.ai/guides/management/app-management#importing-application). diff --git a/en/guides/application-orchestrate/readme.mdx b/en/guides/application-orchestrate/readme.mdx new file mode 100644 index 00000000..cc3083df --- /dev/null +++ b/en/guides/application-orchestrate/readme.mdx @@ -0,0 +1,82 @@ +--- +title: Introduction +--- + +In Dify, an "application" refers to a practical scenario application built on large language models like GPT. By creating an application, you can apply intelligent AI technology to specific needs. It encompasses both the engineering paradigm for developing AI applications and the specific deliverables. + +In short, an application provides developers with: + +* A user-friendly API that can be directly called by backend or frontend applications, authenticated via Token +* A ready-to-use, aesthetically pleasing, and hosted WebApp, which you can further develop using the WebApp template +* An easy-to-use interface that includes prompt engineering, context management, log analysis, and annotation + +You can choose **any one** or **all** of these to support your AI application development. 
+
+### Application Types
+
+Dify offers five types of applications:
+
+* **Chatbot**: A conversational assistant built on LLM
+* **Text Generator**: An assistant for text generation tasks such as writing stories, text classification, translation, etc.
+* **Agent**: A conversational intelligent assistant capable of task decomposition, reasoning, and tool invocation
+* **Chatflow**: A workflow orchestration for multi-round complex dialogue tasks with memory capabilities
+* **Workflow**: A workflow orchestration for single-round tasks like automation and batch processing
+
+The differences between Text Generator and Chatbot are shown in the table below:
+
+| | Text Generator | Chatbot |
+| --- | --- | --- |
+| WebApp Interface | Form + Results | Chat-based |
+| WebAPI Endpoint | `completion-messages` | `chat-messages` |
+| Interaction Mode | One question, one answer | Multi-turn conversation |
+| Streaming Results | Supported | Supported |
+| Context Preservation | Per session | Continuous |
+| User Input Form | Supported | Supported |
+| Datasets and Plugins | Supported | Supported |
+| AI Opening Remarks | Not supported | Supported |
+| Example Scenarios | Translation, judgment, indexing | Chatting |
diff --git a/en/user-guide/build-app/text-generator.mdx b/en/guides/application-orchestrate/text-generator.mdx similarity index 99% rename from en/user-guide/build-app/text-generator.mdx rename to en/guides/application-orchestrate/text-generator.mdx index 98f8fb25..8a9c6a8b 100644 --- a/en/user-guide/build-app/text-generator.mdx +++ b/en/guides/application-orchestrate/text-generator.mdx @@ -1,6 +1,5 @@ --- title: Text Generation Application -version: 'English' --- A text generation application is a type of application specifically designed to produce content in specific formats. These applications allow users to input specific requirements or parameters and automatically generate text output that conforms to preset formats. Unlike chat assistants that can maintain continuous conversations, text generation applications primarily process single inputs to provide one-time content generation services, with "Prompt Generator" being a typical example. diff --git a/en/guides/application-publishing/README.mdx b/en/guides/application-publishing/README.mdx new file mode 100644 index 00000000..b5f5e31e --- /dev/null +++ b/en/guides/application-publishing/README.mdx @@ -0,0 +1,11 @@ +--- +title: Launching Dify Apps +--- + + +For more detailed information, please refer to the following sections: + +- [Publish as a Single-page Webapp](launch-your-webapp-quickly/) +- [Embedding In Websites](embedding-in-websites.md) +- [Developing with APIs](developing-with-apis.md) +- [Based on Frontend Templates](based-on-frontend-templates.md) \ No newline at end of file diff --git a/en/guides/application-publishing/based-on-frontend-templates.mdx b/en/guides/application-publishing/based-on-frontend-templates.mdx new file mode 100644 index 00000000..77bf9a39 --- /dev/null +++ b/en/guides/application-publishing/based-on-frontend-templates.mdx @@ -0,0 +1,44 @@ +--- +title: Based on WebApp Template +--- + + +If developers are developing new products from scratch or in the product prototype design 
phase, you can use Dify to quickly launch an AI site. At the same time, Dify hopes that developers can freely create front-end applications in different forms. For this reason, we provide: + +* **SDK** for quick access to the Dify API in various languages +* **WebApp Template**, a WebApp development scaffold for each type of application + +The WebApp Templates are open source under the MIT license. You are free to modify and deploy them to achieve all the capabilities of Dify, or to use them as reference code for implementing your own App. + +You can find these Templates on GitHub: + +* [Conversational app](https://github.com/langgenius/webapp-conversation) +* [Text generation app](https://github.com/langgenius/webapp-text-generator) + +The fastest way to use the WebApp Template is to click "**Use this template**" on GitHub, which is equivalent to forking a new repository. Then you need to configure the Dify App ID and API Key, like this: + +```javascript +export const APP_ID = '' +export const API_KEY = '' +``` + +More configuration lives in `config/index.ts`: + +```typescript +export const APP_INFO: AppInfo = { + "title": 'Chat APP', + "description": '', + "copyright": '', + "privacy_policy": '', + "default_language": 'zh-Hans' +} + +export const isShowPrompt = true +export const promptTemplate = '' +``` + +> The App ID can be obtained from the App's URL, where the long string of characters is the unique App ID. + +Each WebApp Template provides a README file containing deployment instructions. Usually, WebApp Templates contain a lightweight backend service to ensure that developers' API keys are not directly exposed to users. + +These WebApp Templates can help you quickly build prototypes of AI applications and use all the capabilities of Dify. If you develop your own applications or new templates based on them, feel free to share them with us.
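The App ID mentioned in the note above is the long identifier segment of the app's URL. A hypothetical helper to pull it out programmatically (the `/app/<id>` path layout and UUID shape are assumptions made for this illustration; check your own app URL):

```javascript
// Hypothetical helper: extract the App ID (a UUID-like segment) from an app URL.
// The /app/<id> path layout is an assumption used only for this illustration.
function extractAppId(url) {
  const match = url.match(
    /\/app\/([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/i
  );
  return match ? match[1] : null;
}

console.log(extractAppId('https://cloud.dify.ai/app/12345678-90ab-4def-8234-567890abcdef/configuration'));
```

If your deployment uses a different URL layout, copy the identifier by hand instead.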
diff --git a/en/guides/application-publishing/developing-with-apis.mdx b/en/guides/application-publishing/developing-with-apis.mdx new file mode 100644 index 00000000..18acd523 --- /dev/null +++ b/en/guides/application-publishing/developing-with-apis.mdx @@ -0,0 +1,127 @@ +--- +title: Developing with APIs +--- + + +Dify offers a "Backend-as-a-Service" API, providing numerous benefits to AI application developers. This approach enables developers to access the powerful capabilities of large language models (LLMs) directly in frontend applications without the complexities of backend architecture and deployment processes. + +### Benefits of using Dify API + +* Allow frontend apps to securely access LLM capabilities without backend development +* Design applications visually with real-time updates across all clients +* Well-encapsulated original LLM APIs +* Effortlessly switch between LLM providers and centrally manage API keys +* Operate applications visually, including log analysis, annotation, and user activity observation +* Continuously provide more tools, plugins, and knowledge + +### How to use + +Choose an application, and find API Access in the left-side navigation of the Apps section. On this page, you can view the API documentation provided by Dify and manage credentials for accessing the API. + +You can create multiple access credentials for an application to deliver to different users or developers. This means that API users can use the AI capabilities provided by the application developer, but the underlying Prompt engineering, knowledge, and tool capabilities are encapsulated. + + +As a best practice, make API calls from your backend rather than exposing API keys in plaintext in frontend code or requests. This helps prevent your application from being abused or attacked.
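To make this best practice concrete, a backend can accept only whitelisted fields from the browser and attach the secret key server-side before forwarding the request. A minimal sketch (the field whitelist and key variable are illustrative assumptions, not an official Dify component):

```python
# Sketch: the secret key lives on the server; client-supplied fields are
# whitelisted before the request is forwarded to the Dify API.
# DIFY_API_KEY is a placeholder loaded from server config in a real deployment.
import json

DIFY_API_KEY = "ENTER-YOUR-SECRET-KEY"  # never shipped to the browser

def build_upstream_request(client_payload):
    """Keep only expected fields and attach the server-held credential."""
    allowed = {k: v for k, v in client_payload.items()
               if k in {"query", "conversation_id", "user"}}
    allowed.setdefault("inputs", {})
    allowed.setdefault("response_mode", "streaming")
    headers = {
        "Authorization": f"Bearer {DIFY_API_KEY}",
        "Content-Type": "application/json",
    }
    return headers, json.dumps(allowed)

# Unknown client fields (e.g. a spoofed key) are silently dropped.
headers, body = build_upstream_request({"query": "hi", "user": "abc-123", "evil_field": "x"})
```

The browser only ever talks to your backend; the Dify key never appears in frontend code.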
+ + +For example, if you're a developer in a consulting company, you can offer AI capabilities based on the company's private database to end-users or developers, without exposing your data and AI logic design. This ensures a secure and sustainable service delivery that meets business objectives. + +### Text-generation application + +These applications are used to generate high-quality text, such as articles, summaries, translations, etc., by calling the completion-messages API and sending user input to obtain generated text results. The model parameters and prompt templates used for generating text depend on the developer's settings in the Dify Prompt Arrangement page. + +You can find the API documentation and example requests for this application in **Applications -> Access API**. + +For example, here is a sample API call for text generation: + + + + ```bash +curl --location --request POST 'https://api.dify.ai/v1/completion-messages' \ +--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \ +--header 'Content-Type: application/json' \ +--data-raw '{ + "inputs": {}, + "response_mode": "streaming", + "user": "abc-123" +}' +``` + + + ```python +import requests +import json + +url = "https://api.dify.ai/v1/completion-messages" + +headers = { + 'Authorization': 'Bearer ENTER-YOUR-SECRET-KEY', + 'Content-Type': 'application/json', +} + +data = { + "inputs": {"text": 'Hello, how are you?'}, + "response_mode": "streaming", + "user": "abc-123" +} + +response = requests.post(url, headers=headers, data=json.dumps(data)) + +print(response.text) +``` + + + +### Conversational Applications + +Conversational applications facilitate ongoing dialogue with users through a question-and-answer format. To initiate a conversation, you will call the `chat-messages` API. A `conversation_id` is generated for each session and must be included in subsequent API calls to maintain the conversation flow.
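The basic pattern is: the first call leaves `conversation_id` empty, and every later call reuses the one the API returned. A minimal sketch of the reuse step (assuming `response_mode: "blocking"`, where the reply is a single JSON object containing `conversation_id`; with streaming you would read it from the event stream instead):

```python
# Sketch: build the follow-up payload by reusing the conversation_id returned
# by the first chat-messages call. The sample reply body below is illustrative.
import json

def next_payload(first_reply_body, query, user):
    conversation_id = json.loads(first_reply_body)["conversation_id"]
    return {
        "inputs": {},  # ignored once a conversation_id is passed
        "query": query,
        "response_mode": "blocking",
        "conversation_id": conversation_id,
        "user": user,
    }

payload = next_payload(
    '{"conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276", "answer": "Hi!"}',
    "Tell me more",
    "abc-123",
)
print(payload["conversation_id"])
```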
+ +#### Key Considerations for `conversation_id`: + +- **Generating the `conversation_id`:** When starting a new conversation, leave the `conversation_id` field empty. The system will generate and return a new `conversation_id`, which you will use in future interactions to continue the dialogue. +- **Handling `conversation_id` in Existing Sessions:** Once a `conversation_id` is generated, future calls to the API should include this `conversation_id` to ensure conversation continuity with the Dify bot. When a previous `conversation_id` is passed, any new `inputs` will be ignored. Only the `query` is processed for the ongoing conversation. +- **Managing Dynamic Variables:** If there is a need to modify logic or variables during the session, you can use conversation variables (session-specific variables) to adjust the bot's behavior or responses. + +You can access the API documentation and example requests for this application in **Applications -> Access API**. + +Here is an example of calling the `chat-messages` API: + + + + ```bash +curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \ +--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \ +--header 'Content-Type: application/json' \ +--data-raw '{ + "inputs": {}, + "query": "eh", + "response_mode": "streaming", + "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276", + "user": "abc-123" +}' +``` + + + ```python +import requests +import json + +url = 'https://api.dify.ai/v1/chat-messages' +headers = { + 'Authorization': 'Bearer ENTER-YOUR-SECRET-KEY', + 'Content-Type': 'application/json', +} +data = { + "inputs": {}, + "query": "eh", + "response_mode": "streaming", + "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276", + "user": "abc-123" +} + +response = requests.post(url, headers=headers, data=json.dumps(data)) + +print(response.text) +``` + + diff --git a/en/guides/application-publishing/embedding-in-websites.mdx b/en/guides/application-publishing/embedding-in-websites.mdx new
file mode 100644 index 00000000..6f1a31f7 --- /dev/null +++ b/en/guides/application-publishing/embedding-in-websites.mdx @@ -0,0 +1,139 @@ +--- +title: Embedding In Websites +--- + + +Dify Apps can be embedded in websites using an iframe. This allows you to integrate your Dify App into your website, blog, or any other web page. + +When embedding the Dify Chatbot Bubble Button in your website, you can customize the button style, position, and other settings. + +## Customizing the Dify Chatbot Bubble Button + +The Dify Chatbot Bubble Button can be customized through the following configuration options: + +```javascript +window.difyChatbotConfig = { + // Required, automatically generated by Dify + token: 'YOUR_TOKEN', + // Optional, default is false + isDev: false, + // Optional, when isDev is true, default is 'https://dev.udify.app', otherwise default is 'https://udify.app' + baseUrl: 'YOUR_BASE_URL', + // Optional, accepts any valid HTMLElement attribute other than `id`, such as `style`, `className`, etc. + containerProps: {}, + // Optional, whether the button can be dragged, default is `false` + draggable: false, + // Optional, the axis along which the button can be dragged, default is `both`, can be `x`, `y`, `both` + dragAxis: 'both', + // Optional, an object of inputs set in the Dify chatbot + inputs: { + // key is the variable name + // e.g. + // name: "NAME" + } +} +``` + +## Overriding Default Button Styles + +You can override the default button style using CSS variables or the `containerProps` option. Apply these methods based on CSS specificity to achieve your desired customizations.
+ +### 1. Modifying CSS Variables + +The following CSS variables are supported for customization: + +```css +/* Button distance to bottom, default is `1rem` */ +--dify-chatbot-bubble-button-bottom + +/* Button distance to right, default is `1rem` */ +--dify-chatbot-bubble-button-right + +/* Button distance to left, default is `unset` */ +--dify-chatbot-bubble-button-left + +/* Button distance to top, default is `unset` */ +--dify-chatbot-bubble-button-top + +/* Button background color, default is `#155EEF` */ +--dify-chatbot-bubble-button-bg-color + +/* Button width, default is `50px` */ +--dify-chatbot-bubble-button-width + +/* Button height, default is `50px` */ +--dify-chatbot-bubble-button-height + +/* Button border radius, default is `25px` */ +--dify-chatbot-bubble-button-border-radius + +/* Button box shadow, default is `rgba(0, 0, 0, 0.2) 0px 4px 8px 0px` */ +--dify-chatbot-bubble-button-box-shadow + +/* Button hover transform, default is `scale(1.1)` */ +--dify-chatbot-bubble-button-hover-transform +``` + +To change the background color to #ABCDEF, add this CSS: + +```css +#dify-chatbot-bubble-button { + --dify-chatbot-bubble-button-bg-color: #ABCDEF; +} +``` + +### 2. Using `containerProps` + +Set inline styles using the `style` attribute: + +```javascript +window.difyChatbotConfig = { + // ... other configurations + containerProps: { + style: { + backgroundColor: '#ABCDEF', + width: '60px', + height: '60px', + borderRadius: '30px', + }, + // For minor style overrides, you can also use a string value for the `style` attribute: + // style: 'background-color: #ABCDEF; width: 60px;', + }, +} +``` + +Apply CSS classes using the `className` attribute: + +```javascript +window.difyChatbotConfig = { + // ... other configurations + containerProps: { + className: 'dify-chatbot-bubble-button-custom my-custom-class', + }, +} +``` + +### 3. Passing `inputs` + +There are four types of inputs supported: + +1. **`text-input`**: Accepts any value.
The input string will be truncated if its length exceeds the maximum allowed length. +2. **`paragraph`**: Similar to `text-input`, it accepts any value and truncates the string if it's longer than the maximum length. +3. **`number`**: Accepts a number or a numerical string. If a string is provided, it will be converted to a number using the `Number` function. +4. **`options`**: Accepts any value, provided it matches one of the pre-configured options. + +Example configuration: + +```javascript +window.difyChatbotConfig = { + // Other configuration settings... + inputs: { + name: 'apple', + }, +} +``` + +Note: When using the embed.js script to create an iframe, each input value is compressed with GZIP and base64-encoded before being appended to the URL. + +For example, the URL with processed input values will look like this: +`http://localhost/chatbot/{token}?name=H4sIAKUlmWYA%2FwWAIQ0AAACDsl7gLuiv2PQEUNAuqQUAAAA%3D` \ No newline at end of file diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/README.mdx b/en/guides/application-publishing/launch-your-webapp-quickly/README.mdx new file mode 100644 index 00000000..b2826606 --- /dev/null +++ b/en/guides/application-publishing/launch-your-webapp-quickly/README.mdx @@ -0,0 +1,45 @@ +--- +title: Publish as a Single-page Web App +--- + + +One of the benefits of creating AI applications with Dify is that you can publish a single-page AI web app accessible to all users on the internet within minutes. + +* If you're using the self-hosted open-source version, the application will run on your server +* If you're using the cloud service, the application will be hosted at [https://udify.app/](https://udify.app/). + +### Publishing an AI Website + +Toggle the **"In service / Disabled"** switch; your Web App URL takes effect immediately and is publicly accessible on the internet.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/7ef249e00d089beb981d4e02e7207a42.png) + +We have pre-set Web App UI for the following two types of applications: + +* **Text Generation (Preview)** + + + text-generator.md + + +* **Conversation (Preview)** + + + conversation-application.md + + +### Setting Up Your AI Site + +You can modify the language, color theme, copyright ownership, privacy policy link, and disclaimer by clicking the "setting" button. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/ad3f17240475d87becafa6acf5b040db.png) + +Currently, the Web App supports multiple languages: English, Simplified Chinese, Traditional Chinese, Portuguese, German, Japanese, Korean, Ukrainian, and Vietnamese. If you want more languages to be supported, you can submit an Issue on GitHub to seek support or submit a PR to contribute code. + + + web-app-settings.md + + +### Embedding Your AI Site + +You can also integrate the Dify Web App into your own web project, blog, or any other web page. For more details, please refer to [Embedding In Websites](https://docs.dify.ai/guides/application-publishing/embedding-in-websites). diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx b/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx new file mode 100644 index 00000000..91de808f --- /dev/null +++ b/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx @@ -0,0 +1,53 @@ +--- +title: Conversation Application +--- + + +Conversational applications engage in continuous dialogue with users in a question-and-answer format. These applications support the following features (ensure these functions are enabled during application orchestration): + +* Variables filled out before the conversation.
+* Creation, pinning, and deletion of conversations. +* Conversation opening statements. +* Next step question suggestions. +* Speech-to-text. +* References and attributions. + +### Variables Filled Out Before the Conversation + +If you have set variable filling requirements during application orchestration, you will need to fill out the prompted information before entering the conversation window: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/8decae00eeea24622e1f2ef73d4c447e.png) + +Fill in the necessary details and click the "Start Conversation" button to begin chatting. Hover over the AI's response to copy the conversation content, and provide "like" or "dislike" feedback. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/5b7a6f950ed8a2ce3a705f362b4813fe.png) + +### Creation, Pinning, and Deletion of Conversations + +Click the "New Conversation" button to start a new conversation. Hover over a conversation to pin or delete it. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/46372ad4d79a3ea943d43f9434974956.png) + +### Conversation Opener + +If the "Conversation Opener" feature is enabled on the application orchestration page, the AI application will automatically initiate the first line of dialogue when a new conversation is created. 
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/22e59e509296d25eb85cbd541e161c6d.png)
+
+### Follow Up
+
+If the "Follow-up" feature is enabled on the application orchestration page, the system will automatically generate 3 relevant question suggestions after the conversation:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/f88a7ffd777d51299f8b604249c044b3.png)
+
+### Speech-to-Text
+
+If the "Speech-to-Text" feature is enabled during application orchestration, you will see a speech input icon in the input box of the web application. Click the icon to convert speech to text:
+
+_Please ensure that your device environment is authorized to use the microphone._
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/3a64c79792f1166301403f6c44cf4c85.png)
+
+### References and Attributions
+
+When testing the knowledge base effect within the application, you can go to **Workspace -- Add Function -- Citation and Attribution** to enable the citation and attribution feature. For detailed instructions, please refer to [Citation and Attribution](https://docs.dify.ai/guides/knowledge-base/retrieval-test-and-citation#id-2.-citation-and-attribution).
diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx b/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx
new file mode 100644
index 00000000..f01e4a2d
--- /dev/null
+++ b/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx
@@ -0,0 +1,61 @@
+---
+title: Text Generator Application
+---
+
+
+A text generation application automatically generates high-quality text based on the prompts provided by the user. It can produce many types of text, such as article summaries and translations.
+
+Text generation applications support the following features:
+
+1. Run it once.
+2. Run in batches.
+3. Save the run results.
+4. Generate more similar results.
+
+Let's look at each of these in turn.
+
+### Run it once
+
+Enter the query content and click the run button; the result is generated on the right, as shown in the following figure:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/4c5380cf71066d933082f7c30deacb01.png)
+
+In the generated results section, click the "Copy" button to copy the content to the clipboard, or click the "Save" button to keep it; saved content appears in the "Saved" tab. You can also "like" or "dislike" the generated content.
+
+### Run in batches
+
+Sometimes you need to run an application many times. For example, suppose a web application generates articles from topics, and you want 100 articles on 100 different topics. Running the task one topic at a time means repeating it 100 times, and each run has to finish before the next one can start.
+
+The batch run feature is built for this scenario: you enter all the topics into a single `csv` file and run it once, which is both simpler to operate and faster, since multiple tasks run at the same time. The usage is as follows:
+
+#### Step 1 Enter the batch run page
+
+Click the "Run Batch" tab to enter the batch run page.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/c8381ab7fad14a54c86835dc4b1b6b5d.png)
+
+#### Step 2 Download the template and fill in the content
+
+Click the **"Download the template here"** button to obtain the template file. Edit the file and fill in the required content, then save it as a `.csv` file. Finally, upload the completed file back to Dify.
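For larger batches, the template can also be filled programmatically. A minimal sketch (the `topic` column is a hypothetical variable name — copy the real header row from the downloaded template):

```python
import csv

# Hypothetical input variable name; use the header row from the
# template downloaded in Step 2 instead.
topics = [f"Topic {i}" for i in range(1, 101)]

with open("batch.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["topic"])             # header: the app's input variable names
    writer.writerows([t] for t in topics)  # one generation task per row
```

Each row becomes one run, so the file above queues 100 generations in a single batch.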
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/bae4859c5cb7404ce901b7979237bb93.png)
+
+#### Step 3 Upload the file and run
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/fc84f62f41c12e14ff85b29e6bf43d27.png)
+
+If you need to export the generated content, click the "Download" button in the upper right corner to export it as a `csv` file.
+
+**Note:** The uploaded `csv` file must use `Unicode` encoding; otherwise, the run will fail. When exporting a `csv` file from Excel, WPS, or similar tools, select `Unicode` as the encoding.
+
+### Save run results
+
+Click the "Save" button below the generated results to save them. All saved content is listed in the "Saved" tab.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/3cdd15e87aa1f1aae9f6abadb0f16d1f.png)
+
+### Generate more similar results
+
+If the "More like this" function is enabled on the App's Orchestrate page, clicking the "More like this" button in the web application generates content similar to the current result, as shown below:
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/application-publishing/launch-your-webapp-quickly/65fb111d8e89a8f7b761859265e42f0a.png)
diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx b/en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx
new file mode 100644
index 00000000..57512a61
--- /dev/null
+++ b/en/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx
@@ -0,0 +1,32 @@
+---
+title: Overview
+---
+
+
+Web applications are designed for application users. When an application developer creates an application on Dify, a corresponding web application is generated. 
Users of the web application can use it without logging in. The web application is adapted for various device sizes: PC, tablet, and mobile. + +The content of the web application aligns with the configuration of the published application. When the application's configuration is modified and the "Publish" button is clicked on the prompt orchestration page, the web application's content will be updated according to the current configuration of the application. + +On the application overview page, you can enable or disable access to the web application and modify the web application's site information, including: + +* Icon +* Name +* Application description +* Interface language +* Copyright information +* Privacy policy link + +The functionality and performance of the web application depend on whether the developer has enabled these features during application orchestration, such as: + +* Conversation opening remarks +* Variables to be filled before the conversation +* Next step suggestions +* Speech-to-text +* References and attributions +* More similar answers (for text-based applications) +* ...... + +In the following sections, we will introduce the two types of web applications: + +* Text Generation +* Conversational \ No newline at end of file diff --git a/en/guides/extension/README.mdx b/en/guides/extension/README.mdx new file mode 100644 index 00000000..1d202911 --- /dev/null +++ b/en/guides/extension/README.mdx @@ -0,0 +1,15 @@ +--- +title: Extension +--- + + +In the process of creating AI applications, developers face constantly changing business needs and complex technical challenges. Effectively leveraging extension capabilities can not only enhance the flexibility and functionality of applications but also ensure the security and compliance of enterprise data. 
Dify offers the following two methods of extension: + + + api-based-extension + + + + code-based-extension + + diff --git a/en/guides/extension/api-based-extension/README.mdx b/en/guides/extension/api-based-extension/README.mdx new file mode 100644 index 00000000..f4681cf8 --- /dev/null +++ b/en/guides/extension/api-based-extension/README.mdx @@ -0,0 +1,285 @@ +--- +title: API-Based Extension +--- + + +Developers can extend module capabilities through the API extension module. Currently supported module extensions include: + +* `moderation` +* `external_data_tool` + +Before extending module capabilities, prepare an API and an API Key for authentication, which can also be automatically generated by Dify. In addition to developing the corresponding module capabilities, follow the specifications below so that Dify can invoke the API correctly. + +![Add API Extension](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/extension/api-based-extension/ae289aeca0b29222a4e36820f76e3c5c.png) + +## API Specifications + +Dify will invoke your API according to the following specifications: + +``` +POST {Your-API-Endpoint} +``` + +### Header + + + + + + + + + + + + + + + + + + + + + +
| Header | Value | Description |
| --- | --- | --- |
| `Content-Type` | `application/json` | The request content is in JSON format. |
| `Authorization` | `Bearer {api_key}` | The API Key is transmitted as a bearer token. Parse the `api_key` and verify that it matches the provided API Key to ensure API security. |
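The check described in the table amounts to a few lines of code on the receiving side. A minimal, framework-agnostic sketch (the expected key is whatever you configured in Dify):

```python
def is_authorized(header_value, expected_api_key: str) -> bool:
    """Validate an 'Authorization: Bearer {api_key}' header value."""
    if not header_value:
        return False
    scheme, _, api_key = header_value.partition(" ")
    return scheme.lower() == "bearer" and api_key == expected_api_key

# Accept only the Bearer scheme with the configured key.
assert is_authorized("Bearer 123456", "123456")
assert not is_authorized("Bearer wrong-key", "123456")
assert not is_authorized("Basic 123456", "123456")
assert not is_authorized(None, "123456")
```

A request that fails this check should be rejected with `401` before any extension-point logic runs.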
+
+### Request Body
+
+```JSON
+{
+    "point": string, // Extension point; different modules may contain multiple extension points
+    "params": {
+        ... // Parameters passed to each module's extension point
+    }
+}
+```
+
+### API Response
+
+```JSON
+{
+    ... // For the content returned by the API, see the specific module's design specifications for different extension points.
+}
+```
+
+## Check
+
+When configuring an API-based Extension in Dify, Dify will send a request to the API Endpoint to verify the availability of the API. When the API Endpoint receives `point=ping`, the API should return `result=pong`, as follows:
+
+### Header
+
+```JSON
+Content-Type: application/json
+Authorization: Bearer {api_key}
+```
+
+### Request Body
+
+```JSON
+{
+    "point": "ping"
+}
+```
+
+### Expected API Response
+
+```JSON
+{
+    "result": "pong"
+}
+```
+
+## Example
+
+Here we take the external data tool as an example, where the scenario is retrieving external weather information by region as context.
+
+### API Specifications
+
+`POST https://fake-domain.com/api/dify/receive`
+
+### **Header**
+
+```JSON
+Content-Type: application/json
+Authorization: Bearer 123456
+```
+
+### **Request Body**
+
+```JSON
+{
+    "point": "app.external_data_tool.query",
+    "params": {
+        "app_id": "61248ab4-1125-45be-ae32-0ce91334d021",
+        "tool_variable": "weather_retrieve",
+        "inputs": {
+            "location": "London"
+        },
+        "query": "How's the weather today?"
+    }
+}
+```
+
+### **API Response**
+
+```JSON
+{
+    "result": "City: London\nTemperature: 10°C\nRealFeel®: 8°C\nAir Quality: Poor\nWind Direction: ENE\nWind Speed: 8 km/h\nWind Gusts: 14 km/h\nPrecipitation: Light rain"
+}
+```
+
+### Code Demo
+
+The code is based on the Python FastAPI framework.
+
+#### **Install dependencies.**
pip install 'fastapi[all]' uvicorn
+
+ +#### Write code according to the interface specifications. + +
from fastapi import FastAPI, Body, HTTPException, Header
+from pydantic import BaseModel
+
+app = FastAPI()
+
+
+class InputData(BaseModel):
+    point: str
+    params: dict
+
+
+@app.post("/api/dify/receive")
+async def dify_receive(data: InputData = Body(...), authorization: str = Header(None)):
+    """
+    Receive API query data from Dify.
+    """
+    expected_api_key = "123456"  # TODO Your API key of this API
+    if not authorization:
+        raise HTTPException(status_code=401, detail="Unauthorized")
+
+    auth_scheme, _, api_key = authorization.partition(' ')
+
+    if auth_scheme.lower() != "bearer" or api_key != expected_api_key:
+        raise HTTPException(status_code=401, detail="Unauthorized")
+
+    point = data.point
+
+    # for debug
+    print(f"point: {point}")
+
+    if point == "ping":
+        return {
+            "result": "pong"
+        }
+    if point == "app.external_data_tool.query":
+        return handle_app_external_data_tool_query(params=data.params)
+    # elif point == "{point name}":
+        # TODO other point implementation here
+
+    raise HTTPException(status_code=400, detail="Not implemented")
+
+
+def handle_app_external_data_tool_query(params: dict):
+    app_id = params.get("app_id")
+    tool_variable = params.get("tool_variable")
+    inputs = params.get("inputs")
+    query = params.get("query")
+
+    # for debug
+    print(f"app_id: {app_id}")
+    print(f"tool_variable: {tool_variable}")
+    print(f"inputs: {inputs}")
+    print(f"query: {query}")
+
+    # TODO your external data tool query implementation here, 
+    #  return must be a dict with key "result", and the value is the query result
+    if inputs.get("location") == "London":
+        return {
+            "result": "City: London\nTemperature: 10°C\nRealFeel®: 8°C\nAir Quality: Poor\nWind Direction: ENE\nWind "
+                      "Speed: 8 km/h\nWind Gusts: 14 km/h\nPrecipitation: Light rain"
+        }
+    else:
+        return {"result": "Unknown city"}
+
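Before wiring the service into Dify, the routing above can be smoke-tested without starting a server. This is a condensed, framework-free restatement of the point-handling contract (illustrative only; the weather string is truncated here):

```python
def dispatch(point: str, params: dict) -> dict:
    """Route an extension request to its point handler, mirroring the endpoint above."""
    if point == "ping":
        # Availability check Dify sends when the extension is configured.
        return {"result": "pong"}
    if point == "app.external_data_tool.query":
        # The response must be a dict with a string under "result".
        inputs = params.get("inputs") or {}
        if inputs.get("location") == "London":
            return {"result": "City: London\nTemperature: 10°C"}
        return {"result": "Unknown city"}
    raise ValueError(f"Not implemented: {point}")

assert dispatch("ping", {}) == {"result": "pong"}
assert dispatch("app.external_data_tool.query",
                {"inputs": {"location": "Paris"}}) == {"result": "Unknown city"}
```

Any point the service does not recognize should translate into an error response, matching the `400 Not implemented` branch in the FastAPI handler.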
+
+#### Launch the API service.
+
+The default port is 8000, so the complete address of the API is `http://127.0.0.1:8000/api/dify/receive`, with the configured API Key `123456`.
+
uvicorn main:app --reload --host 0.0.0.0
+
+
+#### Configure this API in Dify.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/extension/api-based-extension/ae289aeca0b29222a4e36820f76e3c5c.png)
+
+#### Select this API extension in the App.
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/extension/api-based-extension/ae289aeca0b29222a4e36820f76e3c5c.png)
+
+When debugging the App, Dify will request the configured API and send the following content (example):
+
+```JSON
+{
+    "point": "app.external_data_tool.query",
+    "params": {
+        "app_id": "61248ab4-1125-45be-ae32-0ce91334d021",
+        "tool_variable": "weather_retrieve",
+        "inputs": {
+            "location": "London"
+        },
+        "query": "How's the weather today?"
+    }
+}
+```
+
+API Response:
+
+```JSON
+{
+    "result": "City: London\nTemperature: 10°C\nRealFeel®: 8°C\nAir Quality: Poor\nWind Direction: ENE\nWind Speed: 8 km/h\nWind Gusts: 14 km/h\nPrecipitation: Light rain"
+}
+```
+
+### Local debugging
+
+Since Dify's cloud version can't access internal network API services, you can use Ngrok to expose your local API service endpoint to the public internet for cloud-based debugging of local code. The steps are:
+
+1. Visit the Ngrok official website at [https://ngrok.com](https://ngrok.com/), register, and download the Ngrok file.
+
+![](../../../.gitbook/assets/spaces_CdDIVDY6AtAz028MFT4d_uploads_kLpE7vN8jg1KrzeCWZtn_download.webp)
+
+2. After downloading, go to the download directory. Unzip the package and run the initialization script as instructed:
+
+```
+$ unzip /path/to/ngrok.zip
+$ ./ngrok config add-authtoken your-token
+```
+
+3. Check the port of your local API service.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/extension/api-based-extension/5d3cf23074d5411ffd8571de28a07774.webp) + +Run the following command to start: + +``` +$ ./ngrok http [port number] +``` + +Upon successful startup, you'll see something like the following: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/extension/api-based-extension/4357f9c0dc1e494b3718b56ae9253e6a.jpeg) + +4. Find the 'Forwarding' address, like the sample domain `https://177e-159-223-41-52.ngrok-free.app`, and use it as your public domain. + +* For example, to expose your locally running service, replace the example URL `http://127.0.0.1:8000/api/dify/receive` with `https://177e-159-223-41-52.ngrok-free.app/api/dify/receive`. + +Now, this API endpoint is accessible publicly. You can configure this endpoint in Dify for local debugging. For the configuration steps, consult the appropriate documentation or guide. + +### Deploy API extension with Cloudflare Workers + +We recommend that you use Cloudflare Workers to deploy your API extension, because Cloudflare Workers can easily provide a public address and can be used for free. + +[cloudflare-workers.md](cloudflare-workers.md "mention") diff --git a/en/guides/extension/api-based-extension/cloudflare-workers.mdx b/en/guides/extension/api-based-extension/cloudflare-workers.mdx new file mode 100644 index 00000000..5dade47d --- /dev/null +++ b/en/guides/extension/api-based-extension/cloudflare-workers.mdx @@ -0,0 +1,111 @@ +--- +title: Deploy API Tools with Cloudflare Workers +--- + + +## Getting Started + +Since the Dify API Extension requires a publicly accessible internet address as an API Endpoint, we need to deploy our API extension to a public internet address. Here, we use Cloudflare Workers for deploying our API extension. + +We clone the [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers), which contains a simple API extension. We can modify this as a base. 
+
+```bash
+git clone https://github.com/crazywoola/dify-extension-workers.git
+cd dify-extension-workers
+cp wrangler.toml.example wrangler.toml
+```
+
+Open the `wrangler.toml` file, and modify `name` and `compatibility_date` to your application's name and compatibility date.
+
+An important configuration here is the `TOKEN` in `vars`, which you will need to provide when adding the API extension in Dify. For security reasons, it's recommended to use a random string as the Token. You should not write the Token directly in the source code, but pass it via environment variables; accordingly, do not commit your `wrangler.toml` to your code repository.
+
+```toml
+name = "dify-extension-example"
+compatibility_date = "2023-01-01"
+
+[vars]
+TOKEN = "bananaiscool"
+```
+
+This API extension returns a random Breaking Bad quote. You can modify the logic of this API extension in `src/index.ts`. This example shows how to interact with a third-party API.
+
+```typescript
+// ⬇️ implement your logic here ⬇️
+// point === "app.external_data_tool.query"
+// https://api.breakingbadquotes.xyz/v1/quotes
+const count = params?.inputs?.count ?? 1;
+const url = `https://api.breakingbadquotes.xyz/v1/quotes/${count}`;
+const result = await fetch(url).then(res => res.text())
+// ⬆️ implement your logic here ⬆️
+```
+
+This repository simplifies all configurations except for business logic. You can directly use `npm` commands to deploy your API extension.
+
+```bash
+npm run deploy
+```
+
+After successful deployment, you will get a public internet address, which you can add in Dify as an API Endpoint. Please note not to miss the `endpoint` path.
+ + +
+ +## Other Logic TL;DR + +### About Bearer Auth + +```typescript +import { bearerAuth } from "hono/bearer-auth"; + +(c, next) => { + const auth = bearerAuth({ token: c.env.TOKEN }); + return auth(c, next); +}, +``` + +Our Bearer authentication logic is as shown above. We use the `hono/bearer-auth` package for Bearer authentication. You can use `c.env.TOKEN` in `src/index.ts` to get the Token. + +### About Parameter Validation + +```typescript +import { z } from "zod"; +import { zValidator } from "@hono/zod-validator"; + +const schema = z.object({ + point: z.union([ + z.literal("ping"), + z.literal("app.external_data_tool.query"), + ]), // Restricts 'point' to two specific values + params: z + .object({ + app_id: z.string().optional(), + tool_variable: z.string().optional(), + inputs: z.record(z.any()).optional(), + query: z.any().optional(), // string or null + }) + .optional(), +}); +``` + +We use `zod` to define the types of parameters. You can use `zValidator` in `src/index.ts` for parameter validation. Get validated parameters through `const { point, params } = c.req.valid("json");`. Our point has only two values, so we use `z.union` for definition. `params` is an optional parameter, defined with `z.optional`. It includes a `inputs` parameter, a `Record` type representing an object with string keys and any values. This type can represent any object. You can get the `count` parameter in `src/index.ts` using `params?.inputs?.count`. 
+ +### Accessing Logs of Cloudflare Workers + +```bash +wrangler tail +``` + +## Reference Content + +* [Cloudflare Workers](https://workers.cloudflare.com/) +* [Cloudflare Workers CLI](https://developers.cloudflare.com/workers/cli-wrangler/install-update) +* [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers) diff --git a/en/guides/extension/api-based-extension/external-data-tool.mdx b/en/guides/extension/api-based-extension/external-data-tool.mdx new file mode 100644 index 00000000..3f6da5c1 --- /dev/null +++ b/en/guides/extension/api-based-extension/external-data-tool.mdx @@ -0,0 +1,196 @@ +--- +title: External Data Tools +--- + + +External data tools are used to fetch additional data from external sources after the end user submits data, and then assemble this data into prompts as additional context information for the LLM. Dify provides a default tool for external API calls, check [External Data Tool](https://docs.dify.ai/guides/knowledge-base/external-data-tool) for details. + +For developers deploying Dify locally, to meet more customized needs or to avoid developing an additional API Server, you can directly insert custom external data tool logic in the form of a plugin based on the Dify service. After extending custom tools, your custom tool options will be added to the dropdown list of tool types, and team members can use these custom tools to fetch external data. + +## Quick Start + +Here is an example of extending an external data tool for `Weather Search`, with the following steps: + +1. Initialize the directory +2. Add frontend form specifications +3. Add implementation class +4. Preview the frontend interface +5. Debug the extension + +### 1. **Initialize the Directory** + +To add a custom type `Weather Search`, you need to create the relevant directory and files under `api/core/external_data_tool`. + +```python +. 
+└── api + └── core + └── external_data_tool + └── weather_search + ├── __init__.py + ├── weather_search.py + └── schema.json +``` + +### 2. **Add Frontend Component Specifications** + +* `schema.json`, which defines the frontend component specifications, detailed in [.](./ "mention") + +```json +{ + "label": { + "en-US": "Weather Search", + "zh-Hans": "天气查询" + }, + "form_schema": [ + { + "type": "select", + "label": { + "en-US": "Temperature Unit", + "zh-Hans": "温度单位" + }, + "variable": "temperature_unit", + "required": true, + "options": [ + { + "label": { + "en-US": "Fahrenheit", + "zh-Hans": "华氏度" + }, + "value": "fahrenheit" + }, + { + "label": { + "en-US": "Centigrade", + "zh-Hans": "摄氏度" + }, + "value": "centigrade" + } + ], + "default": "centigrade", + "placeholder": "Please select temperature unit" + } + ] +} +``` + +### 3. Add Implementation Class + +`weather_search.py` code template, where you can implement the specific business logic. + + +Note: The class variable `name` must be the custom type name, consistent with the directory and file name, and must be unique. + + +```python +from typing import Optional + +from core.external_data_tool.base import ExternalDataTool + + +class WeatherSearch(ExternalDataTool): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "weather_search" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user save the config. + + Example: + .. code-block:: python + config = { + "temperature_unit": "centigrade" + } + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + + if not config.get('temperature_unit'): + raise ValueError('temperature unit is required') + + def query(self, inputs: dict, query: Optional[str] = None) -> str: + """ + Query the external data tool. 
+ + :param inputs: user inputs + :param query: the query of chat app + :return: the tool query result + """ + city = inputs.get('city') + temperature_unit = self.config.get('temperature_unit') + + if temperature_unit == 'fahrenheit': + return f'Weather in {city} is 32°F' + else: + return f'Weather in {city} is 0°C' +``` + + + +### 4. **Debug the Extension** + +Now, you can select the custom `Weather Search` external data tool extension type in the Dify application orchestration interface for debugging. + +## Implementation Class Template + +```python +from typing import Optional + +from core.external_data_tool.base import ExternalDataTool + + +class WeatherSearch(ExternalDataTool): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "weather_search" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user save the config. + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + + # implement your own logic here + + def query(self, inputs: dict, query: Optional[str] = None) -> str: + """ + Query the external data tool. + + :param inputs: user inputs + :param query: the query of chat app + :return: the tool query result + """ + + # implement your own logic here + return "your own data." +``` + +### Detailed Introduction to Implementation Class Development + +### def validate_config + +`schema.json` form validation method, called when the user clicks "Publish" to save the configuration. + +* `config` form parameters + * `{{variable}}` custom form variables + +### def query + +User-defined data query implementation, the returned result will be replaced into the specified variable. + +* `inputs`: Variables passed by the end user +* `query`: Current conversation input content from the end user, a fixed parameter for conversational applications. 
\ No newline at end of file
diff --git a/en/guides/extension/api-based-extension/moderation-extension.mdx b/en/guides/extension/api-based-extension/moderation-extension.mdx
new file mode 100644
index 00000000..1a7eb9e7
--- /dev/null
+++ b/en/guides/extension/api-based-extension/moderation-extension.mdx
@@ -0,0 +1,151 @@
+---
+title: Moderation
+---
+
+
+This module is used to review the content input by end-users and the output from LLMs within the application, and is divided into two types of extension points.
+
+Please read [API-Based Extension](./) first to complete the development and integration of the basic API service capabilities.
+
+## Extension Point
+
+`app.moderation.input`: End-user input content review extension point. It is used to review the content of variables passed in by end-users and the input content of dialogues in conversational applications.
+
+`app.moderation.output`: LLM output content review extension point. It is used to review the content output by the LLM. When the LLM output is streaming, the content is sent to the API in chunks of 100 characters to avoid delays in review when the output content is lengthy.
+
+### `app.moderation.input`
+
+#### Request Body
+
+```json
+{
+    "point": "app.moderation.input",
+    "params": {
+        "app_id": string,
+        "inputs": {
+            "var_1": "value_1",
+            "var_2": "value_2",
+            ...
+        },
+        "query": string | null
+    }
+}
+```
+
+* Example
+
+```json
+{
+    "point": "app.moderation.input",
+    "params": {
+        "app_id": "61248ab4-1125-45be-ae32-0ce91334d021",
+        "inputs": {
+            "var_1": "I will kill you.",
+            "var_2": "I will fuck you."
+        },
+        "query": "Happy everydays."
+    }
+}
+```
+
+#### API Response
+
+```json
+{
+    "flagged": bool,
+    "action": string,
+    "preset_response": string,
+    "inputs": {
+        "var_1": "value_1",
+        "var_2": "value_2",
+        ...
+    },
+    "query": string | null
+}
+```
+
+* Example
+
+`action=direct_output`
+
+```json
+{
+    "flagged": true,
+    "action": "direct_output",
+    "preset_response": "Your content violates our usage policy."
+}
+```
+
+`action=overridden`
+
+```json
+{
+    "flagged": true,
+    "action": "overridden",
+    "inputs": {
+        "var_1": "I will *** you.",
+        "var_2": "I will *** you."
+    },
+    "query": "Happy everydays."
+}
+```
+
+### `app.moderation.output`
+
+#### Request Body
+
+```JSON
+{
+    "point": "app.moderation.output",
+    "params": {
+        "app_id": string,
+        "text": string
+    }
+}
+```
+
+* Example
+
+```JSON
+{
+    "point": "app.moderation.output",
+    "params": {
+        "app_id": "61248ab4-1125-45be-ae32-0ce91334d021",
+        "text": "I will kill you."
+    }
+}
+```
+
+#### API Response
+
+```JSON
+{
+    "flagged": bool,
+    "action": string,
+    "preset_response": string,
+    "text": string
+}
+```
+
+* Example
+
+`action=direct_output`
+
+```JSON
+{
+    "flagged": true,
+    "action": "direct_output",
+    "preset_response": "Your content violates our usage policy."
+}
+```
+
+`action=overridden`
+
+```JSON
+{
+    "flagged": true,
+    "action": "overridden",
+    "text": "I will *** you."
+}
+```
+
diff --git a/en/guides/extension/api-based-extension/moderation.mdx b/en/guides/extension/api-based-extension/moderation.mdx
new file mode 100644
index 00000000..5839fdd3
--- /dev/null
+++ b/en/guides/extension/api-based-extension/moderation.mdx
@@ -0,0 +1,140 @@
+---
+title: Sensitive Content Moderation
+---
+
+
+This module is used to review the content input by end-users and the output content of the LLM within the application. It is divided into two types of extension points.
+
+### Extension Points
+
+* `app.moderation.input` - Extension point for reviewing end-user input content
+  * Used to review the variable content passed in by end-users and the input content of conversational applications.
+* `app.moderation.output` - Extension point for reviewing LLM output content
+  * Used to review the content output by the LLM.
+  * When the LLM output is streamed, the content will be segmented into 100-character blocks for API requests to avoid delays in reviewing longer outputs.
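The 100-character segmentation used for streamed output can be pictured with a small helper (illustrative only — not Dify's actual implementation):

```python
def moderation_blocks(text: str, size: int = 100):
    """Split streamed LLM output into fixed-size blocks for moderation requests."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# A 250-character stream yields two full blocks and one remainder,
# so moderation can begin before the whole answer has finished streaming.
blocks = moderation_blocks("x" * 250)
assert [len(b) for b in blocks] == [100, 100, 50]
```

Each block becomes one `app.moderation.output` request with the block as `text`.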
+ +### app.moderation.input Extension Point + +#### Request Body + +``` +{ + "point": "app.moderation.input", // Extension point type, fixed as app.moderation.input here + "params": { + "app_id": string, // Application ID + "inputs": { // Variable values passed in by end-users, key is the variable name, value is the variable value + "var_1": "value_1", + "var_2": "value_2", + ... + }, + "query": string | null // Current dialogue input content from the end-user, fixed parameter for conversational applications. + } +} +``` + +* Example + * ``` + { + "point": "app.moderation.input", + "params": { + "app_id": "61248ab4-1125-45be-ae32-0ce91334d021", + "inputs": { + "var_1": "I will kill you.", + "var_2": "I will fuck you." + }, + "query": "Happy everydays." + } + } + ``` + +#### API Response + +``` +{ + "flagged": bool, // Whether it violates the moderation rules + "action": string, // Action to take, direct_output for directly outputting a preset response; overridden for overriding the input variable values + "preset_response": string, // Preset response (returned only when action=direct_output) + "inputs": { // Variable values passed in by end-users, key is the variable name, value is the variable value (returned only when action=overridden) + "var_1": "value_1", + "var_2": "value_2", + ... + }, + "query": string | null // Overridden current dialogue input content from the end-user, fixed parameter for conversational applications. (returned only when action=overridden) +} +``` + +* Example + * `action=direct_output` + * ``` + { + "flagged": true, + "action": "direct_output", + "preset_response": "Your content violates our usage policy." + } + ``` + * `action=overridden` + * ``` + { + "flagged": true, + "action": "overridden", + "inputs": { + "var_1": "I will *** you.", + "var_2": "I will *** you." + }, + "query": "Happy everydays." 
+ } + ``` + +### app.moderation.output Extension Point + +#### Request Body + +``` +{ + "point": "app.moderation.output", // Extension point type, fixed as app.moderation.output here + "params": { + "app_id": string, // Application ID + "text": string // LLM response content. When the LLM output is streamed, this will be content segmented into 100-character blocks. + } +} +``` + +* Example + * ``` + { + "point": "app.moderation.output", + "params": { + "app_id": "61248ab4-1125-45be-ae32-0ce91334d021", + "text": "I will kill you." + } + } + ``` + +#### API Response + +``` +{ + "flagged": bool, // Whether it violates the moderation rules + "action": string, // Action to take, direct_output for directly outputting a preset response; overridden for overriding the input variable values + "preset_response": string, // Preset response (returned only when action=direct_output) + "text": string // Overridden LLM response content (returned only when action=overridden) +} +``` + +* Example + * `action=direct_output` + * ``` + { + "flagged": true, + "action": "direct_output", + "preset_response": "Your content violates our usage policy." + } + ``` + * `action=overridden` + * ``` + { + "flagged": true, + "action": "overridden", + "text": "I will *** you." + } + ``` \ No newline at end of file diff --git a/en/guides/extension/code-based-extension/README.mdx b/en/guides/extension/code-based-extension/README.mdx new file mode 100644 index 00000000..95ed8ed7 --- /dev/null +++ b/en/guides/extension/code-based-extension/README.mdx @@ -0,0 +1,101 @@ +--- +title: Code Based Extensions +--- + + +For developers deploying Dify locally, if you want to implement extension capabilities without rewriting an API service, you can use code extensions. This allows you to extend or enhance the functionality of the program in code form (i.e., plugin capability) without disrupting the original code logic of Dify. 
It follows certain interfaces or specifications to achieve compatibility and plug-and-play capability with the main program. Currently, Dify offers two types of code extensions: + +* Adding a new type of external data tool [External Data Tool](https://docs.dify.ai/guides/extension/api-based-extension/external-data-tool) +* Extending sensitive content moderation strategies [Moderation](https://docs.dify.ai/guides/extension/api-based-extension/moderation) + +Based on the above functionalities, you can achieve horizontal expansion by following the code-level interface specifications. If you are willing to contribute your extensions to us, we warmly welcome you to submit a PR to Dify. + +## Frontend Component Specification Definition + +The frontend styles of code extensions are defined through `schema.json`: + +* label: Custom type name, supporting system language switching +* form_schema: List of form contents + * type: Component type + * select: Dropdown options + * text-input: Text + * paragraph: Paragraph + * label: Component name, supporting system language switching + * variable: Variable name + * required: Whether it is required + * default: Default value + * placeholder: Component hint content + * options: Exclusive property for the "select" component, defining the dropdown contents + * label: Dropdown name, supporting system language switching + * value: Dropdown option value + * max_length: Exclusive property for the "text-input" component, maximum length + +### Template Example + +```json +{ + "label": { + "en-US": "Cloud Service", + "zh-Hans": "云服务" + }, + "form_schema": [ + { + "type": "select", + "label": { + "en-US": "Cloud Provider", + "zh-Hans": "云厂商" + }, + "variable": "cloud_provider", + "required": true, + "options": [ + { + "label": { + "en-US": "AWS", + "zh-Hans": "亚马逊" + }, + "value": "AWS" + }, + { + "label": { + "en-US": "Google Cloud", + "zh-Hans": "谷歌云" + }, + "value": "GoogleCloud" + }, + { + "label": { + "en-US": "Azure Cloud", + 
"zh-Hans": "微软云" + }, + "value": "Azure" + } + ], + "default": "GoogleCloud", + "placeholder": "" + }, + { + "type": "text-input", + "label": { + "en-US": "API Endpoint", + "zh-Hans": "API Endpoint" + }, + "variable": "api_endpoint", + "required": true, + "max_length": 100, + "default": "", + "placeholder": "https://api.example.com" + }, + { + "type": "paragraph", + "label": { + "en-US": "API Key", + "zh-Hans": "API Key" + }, + "variable": "api_keys", + "required": true, + "default": "", + "placeholder": "Paste your API key here" + } + ] +} +``` \ No newline at end of file diff --git a/en/guides/extension/code-based-extension/external-data-tool.mdx b/en/guides/extension/code-based-extension/external-data-tool.mdx new file mode 100644 index 00000000..0f6dea38 --- /dev/null +++ b/en/guides/extension/code-based-extension/external-data-tool.mdx @@ -0,0 +1,196 @@ +--- +title: External Data Tools +--- + + +External data tools are used to fetch additional data from external sources after the end user submits data, and then assemble this data into prompts as additional context information for the LLM. Dify provides a default tool for external API calls, check [api-based-extension](../api-based-extension/ "mention") for details. + +For developers deploying Dify locally, to meet more customized needs or to avoid developing an additional API Server, you can directly insert custom external data tool logic in the form of a plugin based on the Dify service. After extending custom tools, your custom tool options will be added to the dropdown list of tool types, and team members can use these custom tools to fetch external data. + +## Quick Start + +Here is an example of extending an external data tool for `Weather Search`, with the following steps: + +1. Initialize the directory +2. Add frontend form specifications +3. Add implementation class +4. Preview the frontend interface +5. Debug the extension + +### 1. 
**Initialize the Directory** + +To add a custom type `Weather Search`, you need to create the relevant directory and files under `api/core/external_data_tool`. + +```python +. +└── api + └── core + └── external_data_tool + └── weather_search + ├── __init__.py + ├── weather_search.py + └── schema.json +``` + +### 2. **Add Frontend Component Specifications** + +* `schema.json`, which defines the frontend component specifications, detailed in [.](./ "mention") + +```json +{ + "label": { + "en-US": "Weather Search", + "zh-Hans": "天气查询" + }, + "form_schema": [ + { + "type": "select", + "label": { + "en-US": "Temperature Unit", + "zh-Hans": "温度单位" + }, + "variable": "temperature_unit", + "required": true, + "options": [ + { + "label": { + "en-US": "Fahrenheit", + "zh-Hans": "华氏度" + }, + "value": "fahrenheit" + }, + { + "label": { + "en-US": "Centigrade", + "zh-Hans": "摄氏度" + }, + "value": "centigrade" + } + ], + "default": "centigrade", + "placeholder": "Please select temperature unit" + } + ] +} +``` + +### 3. Add Implementation Class + +`weather_search.py` code template, where you can implement the specific business logic. + + +Note: The class variable `name` must be the custom type name, consistent with the directory and file name, and must be unique. + + +```python +from typing import Optional + +from core.external_data_tool.base import ExternalDataTool + + +class WeatherSearch(ExternalDataTool): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "weather_search" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user save the config. + + Example: + .. 
code-block:: python + config = { + "temperature_unit": "centigrade" + } + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + + if not config.get('temperature_unit'): + raise ValueError('temperature unit is required') + + def query(self, inputs: dict, query: Optional[str] = None) -> str: + """ + Query the external data tool. + + :param inputs: user inputs + :param query: the query of chat app + :return: the tool query result + """ + city = inputs.get('city') + temperature_unit = self.config.get('temperature_unit') + + if temperature_unit == 'fahrenheit': + return f'Weather in {city} is 32°F' + else: + return f'Weather in {city} is 0°C' +``` + + + + + +### 4. **Debug the Extension** + +Now, you can select the custom `Weather Search` external data tool extension type in the Dify application orchestration interface for debugging. + +## Implementation Class Template + +```python +from typing import Optional + +from core.external_data_tool.base import ExternalDataTool + + +class WeatherSearch(ExternalDataTool): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "weather_search" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user save the config. + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + + # implement your own logic here + + def query(self, inputs: dict, query: Optional[str] = None) -> str: + """ + Query the external data tool. + + :param inputs: user inputs + :param query: the query of chat app + :return: the tool query result + """ + + # implement your own logic here + return "your own data." +``` + +### Detailed Introduction to Implementation Class Development + +### def validate_config + +`schema.json` form validation method, called when the user clicks "Publish" to save the configuration. 
+ +* `config` form parameters + + * `{{variable}}` custom form variables + +### def query + +User-defined data query implementation; the returned result replaces the specified variable in the prompt. + +* `inputs`: Variables passed by the end user +* `query`: Current conversation input content from the end user, a fixed parameter for conversational applications. \ No newline at end of file diff --git a/en/guides/extension/code-based-extension/moderation.mdx b/en/guides/extension/code-based-extension/moderation.mdx new file mode 100644 index 00000000..7a16678b --- /dev/null +++ b/en/guides/extension/code-based-extension/moderation.mdx @@ -0,0 +1,317 @@ +--- +title: Sensitive Content Moderation +--- + + +In addition to the system's built-in content moderation types, Dify also supports user-defined content moderation rules. This method is suitable for developers customizing their own private deployments. For instance, in an enterprise's internal customer service setup, users (when submitting queries) and customer service agents (when responding) may be required to avoid not only words related to violence, sex, and illegal activities, but also terms the enterprise forbids or content that violates its internally established moderation logic. Developers can extend custom content moderation rules at the code level in a private deployment of Dify. + +## Quick Start + +Here is an example of extending a `Cloud Service` content moderation type, with the steps as follows: + +1. Initialize the directory +2. Add the frontend component definition file +3. Add the implementation class +4. Preview the frontend interface +5. Debug the extension + +### 1. Initialize the Directory + +To add a custom type `Cloud Service`, create the relevant directories and files under the `api/core/moderation` directory. + +```Plain +. +└── api +    └── core +        └── moderation +            └── cloud_service +                ├── __init__.py +                ├── cloud_service.py +                └── schema.json +``` + +### 2.
Add Frontend Component Specifications + +* `schema.json`: This file defines the frontend component specifications. For details, see [.](./ "mention"). + +```json +{ + "label": { + "en-US": "Cloud Service", + "zh-Hans": "云服务" + }, + "form_schema": [ + { + "type": "select", + "label": { + "en-US": "Cloud Provider", + "zh-Hans": "云厂商" + }, + "variable": "cloud_provider", + "required": true, + "options": [ + { + "label": { + "en-US": "AWS", + "zh-Hans": "亚马逊" + }, + "value": "AWS" + }, + { + "label": { + "en-US": "Google Cloud", + "zh-Hans": "谷歌云" + }, + "value": "GoogleCloud" + }, + { + "label": { + "en-US": "Azure Cloud", + "zh-Hans": "微软云" + }, + "value": "Azure" + } + ], + "default": "GoogleCloud", + "placeholder": "" + }, + { + "type": "text-input", + "label": { + "en-US": "API Endpoint", + "zh-Hans": "API Endpoint" + }, + "variable": "api_endpoint", + "required": true, + "max_length": 100, + "default": "", + "placeholder": "https://api.example.com" + }, + { + "type": "paragraph", + "label": { + "en-US": "API Key", + "zh-Hans": "API Key" + }, + "variable": "api_keys", + "required": true, + "default": "", + "placeholder": "Paste your API key here" + } + ] +} +``` + +### 3. Add Implementation Class + +`cloud_service.py` code template where you can implement specific business logic. + + +Note: The class variable name must be the same as the custom type name, matching the directory and file names, and must be unique. + + +```python +from core.moderation.base import Moderation, ModerationAction, ModerationInputsResult, ModerationOutputsResult + +class CloudServiceModeration(Moderation): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "cloud_service" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user saves the config. + + Example: + .. 
code-block:: python + config = { + "cloud_provider": "GoogleCloud", + "api_endpoint": "https://api.example.com", + "api_keys": "123456", + "inputs_config": { + "enabled": True, + "preset_response": "Your content violates our usage policy. Please revise and try again." + }, + "outputs_config": { + "enabled": True, + "preset_response": "Your content violates our usage policy. Please revise and try again." + } + } + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + + cls._validate_inputs_and_outputs_config(config, True) + + if not config.get("cloud_provider"): + raise ValueError("cloud_provider is required") + + if not config.get("api_endpoint"): + raise ValueError("api_endpoint is required") + + if not config.get("api_keys"): + raise ValueError("api_keys is required") + + def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + """ + Moderation for inputs. + + :param inputs: user inputs + :param query: the query of chat app, there is empty if is completion app + :return: the moderation result + """ + flagged = False + preset_response = "" + + if self.config['inputs_config']['enabled']: + preset_response = self.config['inputs_config']['preset_response'] + + if query: + inputs['query__'] = query + flagged = self._is_violated(inputs) + + # return ModerationInputsResult(flagged=flagged, action=ModerationAction.overridden, inputs=inputs, query=query) + return ModerationInputsResult(flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response) + + def moderation_for_outputs(self, text: str) -> ModerationOutputsResult: + """ + Moderation for outputs. 
+ + :param text: the text of LLM response + :return: the moderation result + """ + flagged = False + preset_response = "" + + if self.config['outputs_config']['enabled']: + preset_response = self.config['outputs_config']['preset_response'] + + flagged = self._is_violated({'text': text}) + + # return ModerationOutputsResult(flagged=flagged, action=ModerationAction.overridden, text=text) + return ModerationOutputsResult(flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response) + + def _is_violated(self, inputs: dict): + """ + The main logic of moderation. + + :param inputs: + :return: the moderation result + """ + return False +``` + + + + + +### 4. Debug the Extension + +At this point, you can select the custom `Cloud Service` content moderation extension type for debugging in the Dify application orchestration interface. + +## Implementation Class Template + +```python +from core.moderation.base import Moderation, ModerationAction, ModerationInputsResult, ModerationOutputsResult + +class CloudServiceModeration(Moderation): + """ + The name of custom type must be unique, keep the same with directory and file name. + """ + name: str = "cloud_service" + + @classmethod + def validate_config(cls, tenant_id: str, config: dict) -> None: + """ + schema.json validation. It will be called when user saves the config. + + :param tenant_id: the id of workspace + :param config: the variables of form config + :return: + """ + cls._validate_inputs_and_outputs_config(config, True) + + # implement your own logic here + + def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + """ + Moderation for inputs. 
+ + :param inputs: user inputs + :param query: the query of chat app, there is empty if is completion app + :return: the moderation result + """ + flagged = False + preset_response = "" + + # implement your own logic here + + # return ModerationInputsResult(flagged=flagged, action=ModerationAction.overridden, inputs=inputs, query=query) + return ModerationInputsResult(flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response) + + def moderation_for_outputs(self, text: str) -> ModerationOutputsResult: + """ + Moderation for outputs. + + :param text: the text of LLM response + :return: the moderation result + """ + flagged = False + preset_response = "" + + # implement your own logic here + + # return ModerationOutputsResult(flagged=flagged, action=ModerationAction.overridden, text=text) + return ModerationOutputsResult(flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response) +``` + +## Detailed Introduction to Implementation Class Development + +### def validate\_config + +The `schema.json` form validation method is called when the user clicks "Publish" to save the configuration. + +* `config` form parameters + * `{{variable}}` custom variable of the form + * `inputs_config` input moderation preset response + * `enabled` whether it is enabled + * `preset_response` input preset response + * `outputs_config` output moderation preset response + * `enabled` whether it is enabled + * `preset_response` output preset response + +### def moderation\_for\_inputs + +Input validation function + +* `inputs`: values passed by the end user +* `query`: the current input content of the end user in a conversation, a fixed parameter for conversational applications. 
+* `ModerationInputsResult` + * `flagged`: whether it violates the moderation rules + * `action`: action to be taken + * `direct_output`: directly output the preset response + * `overridden`: override the passed variable values + * `preset_response`: preset response (returned only when action=direct_output) + * `inputs`: values passed by the end user, with key as the variable name and value as the variable value (returned only when action=overridden) + * `query`: overridden current input content of the end user in a conversation, a fixed parameter for conversational applications (returned only when action=overridden) + +### def moderation\_for\_outputs + +Output validation function + +* `text`: content of the LLM response. When the LLM output is streamed, this is the content in segments of 100 characters. +* `ModerationOutputsResult` + * `flagged`: whether it violates the moderation rules + * `action`: action to be taken + * `direct_output`: directly output the preset response + * `overridden`: override the LLM response content + * `preset_response`: preset response (returned only when action=direct_output) + * `text`: overridden content of the LLM response (returned only when action=overridden).
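 + +To make the return shapes above concrete, here is a minimal, self-contained sketch of keyword-based masking logic that a custom `_is_violated` implementation might build on. The banned-word list and helper names are hypothetical; a real extension would plug this logic into the `Moderation` subclass shown in the template and read its word list from the config saved via `schema.json`.

```python
import re

# Hypothetical banned-word list; a real extension would derive this from
# the form config saved through schema.json.
BANNED_WORDS = ["kill"]


def is_violated(inputs: dict) -> bool:
    """Return True if any input value contains a banned word (case-insensitive)."""
    text = " ".join(str(v) for v in inputs.values()).lower()
    return any(word in text for word in BANNED_WORDS)


def mask_text(text: str) -> str:
    """Replace each banned word with '***', as in the overridden examples above."""
    for word in BANNED_WORDS:
        text = re.sub(re.escape(word), "***", text, flags=re.IGNORECASE)
    return text


def moderate_inputs(inputs: dict) -> dict:
    """Build a dict in the shape of the app.moderation.input API response."""
    if not is_violated(inputs):
        return {"flagged": False}
    return {
        "flagged": True,
        "action": "overridden",
        "inputs": {k: mask_text(str(v)) for k, v in inputs.items()},
    }
```

Running `moderate_inputs` on the example payload from the request-body section produces an `action=overridden` response with `"I will kill you."` masked to `"I will *** you."`, mirroring the documented example.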
\ No newline at end of file diff --git a/en/user-guide/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx b/en/guides/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx similarity index 100% rename from en/user-guide/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx rename to en/guides/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx diff --git a/en/user-guide/knowledge-base/api-documentation/external-knowledge-api.mdx b/en/guides/knowledge-base/api-documentation/external-knowledge-api.mdx similarity index 100% rename from en/user-guide/knowledge-base/api-documentation/external-knowledge-api.mdx rename to en/guides/knowledge-base/api-documentation/external-knowledge-api.mdx diff --git a/en/user-guide/knowledge-base/connect-external-knowledge-base.mdx b/en/guides/knowledge-base/connect-external-knowledge-base.mdx similarity index 100% rename from en/user-guide/knowledge-base/connect-external-knowledge-base.mdx rename to en/guides/knowledge-base/connect-external-knowledge-base.mdx diff --git a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx similarity index 100% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx diff --git a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx similarity index 100% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx diff --git 
a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx similarity index 100% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx diff --git a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx similarity index 100% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx diff --git a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx similarity index 96% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx index 9b5d3bf6..67c9e6ef 100644 --- a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx +++ b/en/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx @@ -34,7 +34,7 @@ title: 知识库创建步骤 在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ ](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。Unstructured 能够高效地提取并转换你的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: * SaaS 版不可选,默认使用 Unstructured ETL; -* 社区版可选,默认使用 Dify ETL 
,可通过[环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; 文件解析支持格式的差异: diff --git a/en/user-guide/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx b/en/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx similarity index 100% rename from en/user-guide/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx rename to en/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx diff --git a/en/user-guide/knowledge-base/external-knowledge-api.mdx b/en/guides/knowledge-base/external-knowledge-api.mdx similarity index 100% rename from en/user-guide/knowledge-base/external-knowledge-api.mdx rename to en/guides/knowledge-base/external-knowledge-api.mdx diff --git a/en/user-guide/knowledge-base/faq.mdx b/en/guides/knowledge-base/faq.mdx similarity index 100% rename from en/user-guide/knowledge-base/faq.mdx rename to en/guides/knowledge-base/faq.mdx diff --git a/en/user-guide/knowledge-base/indexing-and-retrieval/hybrid-search.mdx b/en/guides/knowledge-base/indexing-and-retrieval/hybrid-search.mdx similarity index 100% rename from en/user-guide/knowledge-base/indexing-and-retrieval/hybrid-search.mdx rename to en/guides/knowledge-base/indexing-and-retrieval/hybrid-search.mdx diff --git a/en/user-guide/knowledge-base/indexing-and-retrieval/rerank.mdx b/en/guides/knowledge-base/indexing-and-retrieval/rerank.mdx similarity index 100% rename from en/user-guide/knowledge-base/indexing-and-retrieval/rerank.mdx rename to en/guides/knowledge-base/indexing-and-retrieval/rerank.mdx diff --git a/en/user-guide/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx b/en/guides/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx similarity index 100% 
rename from en/user-guide/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx rename to en/guides/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx diff --git a/en/user-guide/knowledge-base/indexing-and-retrieval/retrieval.mdx b/en/guides/knowledge-base/indexing-and-retrieval/retrieval.mdx similarity index 100% rename from en/user-guide/knowledge-base/indexing-and-retrieval/retrieval.mdx rename to en/guides/knowledge-base/indexing-and-retrieval/retrieval.mdx diff --git a/en/user-guide/knowledge-base/integrate-knowledge-within-application.mdx b/en/guides/knowledge-base/integrate-knowledge-within-application.mdx similarity index 100% rename from en/user-guide/knowledge-base/integrate-knowledge-within-application.mdx rename to en/guides/knowledge-base/integrate-knowledge-within-application.mdx diff --git a/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/external-knowledge-api.mdx b/en/guides/knowledge-base/knowledge-and-documents-maintenance/external-knowledge-api.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-and-documents-maintenance/external-knowledge-api.mdx rename to en/guides/knowledge-base/knowledge-and-documents-maintenance/external-knowledge-api.mdx diff --git a/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx b/en/guides/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx rename to en/guides/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx diff --git a/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx b/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx rename to 
en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx diff --git a/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx b/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx new file mode 100644 index 00000000..2d02dd83 --- /dev/null +++ b/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx @@ -0,0 +1,242 @@ +--- +title: Knowledge Base and Document Maintenance +--- + +## Managing Documents in the Knowledge Base + +### Adding Documents + +A knowledge base is a collection of documents. Documents can be uploaded by developers or operators, or synchronized from other data sources. Each document in the knowledge base corresponds to a file in its data source, such as a Notion document or an online webpage. + +To upload a new document to an existing knowledge base, go to **Knowledge Base** > **Documents** and click **Add File**. + +![Uploading a new document to the knowledge base](https://assets-docs.dify.ai/2024/12/424ab491aaebe09b490a36d26c9fa8da.png) + +### Disable / Archive / Delete Documents + +**Enable**: Documents in a normal state can be edited and retrieved in the knowledge base. If a document has been disabled, you can re-enable it. Archived documents must first be unarchived before they can be re-enabled. + +**Disable**: If you don't want a document to be indexed during use, toggle off the blue switch on the right side of the document to disable it. A disabled document can still be edited or modified. + +**Archive**: Older documents that are no longer in use, but that you don't want to delete, can be archived. Archived documents can only be viewed or deleted and cannot be edited. You can archive a document from the knowledge base's **Document List** by clicking the **Archive** button, or within the document's details page. Archiving can be undone.
+ +**Delete**: ⚠️ Dangerous option. For incorrect documents or clearly ambiguous content, select Delete from the menu on the right side of the document. Deleted content cannot be restored, so proceed with caution. + +> All of the above options support batch operations after multiple documents are selected. + +![Batch file operations](https://assets-docs.dify.ai/2024/12/5e0e64859a1ac51602d167ec55ef9350.png) + +**Note:** + +If documents in your knowledge base haven't been updated or retrieved for a while, the system disables those inactive documents to ensure optimal performance. + +* For Sandbox users, the inactive document disable period is **7 days**. +* For Professional and Team users, it is **30 days**. + +You can re-enable these disabled documents and continue using them at any time by clicking the "Enable" button in the knowledge base. Paid users are also provided with a **one-click revert** function. + +![One-click revert](https://assets-docs.dify.ai/2024/12/bf6485b17aec716741eb65e307c2274c.png) + +*** + +## Managing Text Chunks + +### Viewing Text Chunks + +In the knowledge base, each uploaded document is stored as text chunks. By clicking on the document title, you can view the list of chunks and their specific text content on the details page. Each page displays 10 chunks by default, but you can change the number of chunks shown per page at the bottom of the page. + +Only the first two lines of each content chunk are visible in the preview. If you need to see the full text within a chunk, click the "Expand Chunk" button for a complete view. + +![Expand text chunks](https://assets-docs.dify.ai/2024/12/86cc80f17fab1eea75aa73ee681e4663.png) + +You can quickly view all enabled or disabled documents using the filter.
+ +![Filter text chunks](https://assets-docs.dify.ai/2025/01/47ef07319175a102bfd1692dcc6cac9b.png) + +Different [chunking modes](../create-knowledge-and-upload-documents/2.-choose-a-chunk-mode.md) correspond to different text chunk preview methods: + + + + **General Mode** + + Chunks of text in [General mode](../create-knowledge-and-upload-documents.md#general) are independent blocks. If you want to view the complete content of a chunk, click the **full-screen** icon. + + ![Full screen viewing](https://assets-docs.dify.ai/2024/12/c37a1a247092cda9433a10243543698f.png) + + Click the document title at the top to quickly switch to other documents in the knowledge base. + + ![General mode - text chunking](https://assets-docs.dify.ai/2024/12/4422286c6d254e13c1ab59b147f0ffbf.png) + + + **Parent-child Mode** + + In [Parent-child](maintain-knowledge-documents.md#parent-child-chunking-mode) mode, content is divided into parent chunks and child chunks. + + * **Parent chunks** + + After selecting a document in the knowledge base, you'll first see the parent chunk content. Parent chunks can be split by **Paragraph** or **Full Doc**, offering a more comprehensive context. The illustration below shows how the text preview differs between these split modes. + + ![Difference in preview between paragraph and full doc](https://assets-docs.dify.ai/2024/12/b3961da2536dc922496ef6646315b9f4.png) + + * **Child chunks** + + Child chunks are usually sentences (smaller text blocks) within a paragraph, containing more detailed information. Each chunk displays its character count and the number of times it has been retrieved. Clicking **"Child Chunks"** reveals more details. If you want to see the full content of a chunk, click the full-screen icon in the top-right corner of that chunk to enter full-screen reading mode.
+ + ![Parent-child mode - text chunking](https://assets-docs.dify.ai/2024/12/c0776f91e155bb1c961ae255bb98f39e.png) + + + **Q\&A Mode** + + In Q\&A Mode, a content chunk consists of a question and an answer. Click on any document title to view the text chunks. + + ![Q&A Mode - check content chunk](https://assets-docs.dify.ai/2024/12/98e2486f6c5e06b4ece1b81d078afa08.png) + + + +*** + +### Checking Chunk Quality + +Document chunking significantly influences the Q\&A performance of knowledge-base applications. It's recommended to perform a manual review of chunking quality before integrating the knowledge base with your application. + +Although automated chunking based on character length, delimiters, or NLP semantics can significantly reduce the workload of chunking large volumes of text, chunk quality still depends on the text structure of each document format and on semantic context. Manual review and correction can effectively compensate for the shortcomings of automated chunking in semantic recognition. + +When checking chunk quality, pay attention to the following situations: + +* **Overly short text chunks**, leading to semantic loss; + +![Overly short text chunks](https://assets-docs.dify.ai/2024/12/ee081e98c1649aea4a5c2b15b88e11aa.png) + +* **Overly long text chunks**, leading to semantic noise that affects matching accuracy; + +![Overly long text chunks](https://assets-docs.dify.ai/2024/12/ac47381ae4be183768dd025c37c049fa.png) + +* **Obvious semantic truncation**, which occurs when a maximum segment length limit forces a split, leading to missing content during recall; + +![Obvious semantic truncation](https://assets-docs.dify.ai/2024/12/b8ab7ac84028b0b16c3948f35015e069.png) + +*** + +### Adding Text Chunks + +You can add text chunks individually to the knowledge base, and different chunking modes correspond to different ways of adding those chunks. + + +Adding text chunks is a paid feature.
Please upgrade your account [here](https://dify.ai/pricing) to access this functionality. + + + + + **General Mode** + + Click **Add Chunks** in the chunks list page to add one or multiple custom chunks to the document. + + ![General mode - Add chunks](https://assets-docs.dify.ai/2024/12/552ff4ab9e77130ad09aaef878b19cc9.png) + + When manually adding text chunks, you can choose to add both the main content and keywords. After entering the content, select the **"Add another"** checkbox at the bottom to continue adding more text chunks seamlessly. + + ![General mode - Add another text chunk](https://assets-docs.dify.ai/2024/12/cd769622bc1d85c037277ef6fa5247c9.png) + + To add chunks in bulk, first download the upload template in CSV format, edit the chunk contents in Excel according to the template format, then save and upload the CSV file. + + ![General mode - Add customize chunks in bulk](https://assets-docs.dify.ai/2024/12/5e501dd8efba02ff31d2e739417ce864.png) + + + **Parent-child Mode** + + Click Add Chunks in the Chunk list to add one or multiple custom **parent chunks** to the document. + + ![Parent-child mode - Add chunks](https://assets-docs.dify.ai/2024/12/ed4be3bf178e3a41d53bcc10255ad3b2.png) + + After entering the content, select the **"Add another"** checkbox at the bottom to keep adding more text chunks. + + ![Parent-child mode - Add chunks 2](https://assets-docs.dify.ai/2024/12/ba64232eea364b68f2e38341eb9cf5c1.png) + + You can also add child chunks individually under a parent chunk: click "Add" on the right side within the parent chunk. + + ![Parent-child mode - Add child chunks](https://assets-docs.dify.ai/2024/12/23f68a369eb9c1a2cc9022b99a08341d.png) + + + **Q\&A Mode** + + Click the "Add Chunk" button at the top of the chunk list to manually add one or more question-and-answer chunks to the document.
+ + + +*** + +### Editing Text Chunks + + + + **General Mode** + + You can directly edit the content of added chunks, including the **text content or keywords within the chunks.** + + To prevent duplicate edits, an "Edited" tag will appear on the content chunk after it has been modified. + + ![Edit text chunks](https://assets-docs.dify.ai/2024/12/92e7788dad008d38f7c8f532fbcb3636.png) + + + **Parent-child Mode** + + A parent chunk contains the content of its child chunks, but they remain independent. You can edit the parent chunk or child chunks separately. Below is a diagram explaining the process of modifying parent and child chunks: + + ![Diagram of editing parent-child chunks](https://assets-docs.dify.ai/2024/12/aacdb2e95b9b7c0265455caaf0f1f55f.png) + + **To edit a parent chunk:** + + 1\. Click the Edit button on the right side of the parent chunk. + + 2\. Enter your changes and then click **Save**—this won't affect the content of the child chunks. + + 3\. If you want to regenerate the child chunks after editing, click Save and Re-generate Child Chunks. + + To prevent duplicate edits, an "Edited" tag will appear on the content chunk after it has been modified. + + ![Parent-child mode - Modify parent chunks](https://assets-docs.dify.ai/2024/12/06354a75368f96b3f8f2afaad4f50b0c.png) + + **To edit a child chunk**: select any child chunk, enter edit mode, and save after making your changes. The modification will not affect the contents of the parent chunk. Child chunks that have been edited or newly added will be marked with a deep blue label, `C-NUMBER-EDITED`. + + You can also treat child chunks as tags for the current parent text block. + + ![Parent-child mode - modify child chunks](https://assets-docs.dify.ai/2024/12/a59563614d8f4661ebfb20f6b646b4ea.png) + + + **Q\&A Mode** + + In Q\&A chunking mode, each content chunk consists of a question and an answer.
Click on the text chunk you wish to edit to modify the question and answer individually. Additionally, you can edit the keywords for the current chunk. + + ![Q&A Mode - modify text chunks](https://assets-docs.dify.ai/2024/12/5c69adc0d4ec470d0677e67a4dd894a1.png) + + + +### Modify Text Chunks for Uploaded Documents + +The knowledge base supports reconfiguring how documents are segmented into chunks. + +**Larger Chunks** + +* Retain more context within each chunk, ideal for tasks requiring a broader understanding of the text. +* Reduce the total number of chunks, lowering processing time and storage overhead. + +**Smaller Chunks** + +* Provide finer granularity, improving accuracy for tasks like extraction or summarization. +* Reduce the risk of exceeding model token limits, making it safer for models with stricter constraints. + +Go to **Chunk Settings**, adjust the settings, and click **Save & Process** to save changes and reprocess the document. The chunk list will update automatically once processing is complete—no page refresh needed. + +![Chunk Settings](https://assets-docs.dify.ai/2025/01/36cb20be8aae1f368ebf501c0d579051.png) + +![Save & Process](https://assets-docs.dify.ai/2025/01/a47b890c575a7693c40303d3d7cb4952.png) + +*** + +### Metadata + +Metadata is captured from various source documents (e.g., title, URL, keywords, or a web page description), and it is also used as structured fields during the chunk retrieval process for filtering results or displaying citation sources.
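To make the filtering role of metadata concrete, here is a minimal, self-contained sketch (a conceptual illustration, not Dify's internal implementation; the `Chunk` class and `filter_chunks` helper are hypothetical):

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """A retrieved text chunk with its structured metadata fields."""
    text: str
    metadata: dict = field(default_factory=dict)


def filter_chunks(chunks: list[Chunk], **conditions) -> list[Chunk]:
    """Keep only chunks whose metadata matches every given field/value pair."""
    return [
        c for c in chunks
        if all(c.metadata.get(k) == v for k, v in conditions.items())
    ]


chunks = [
    Chunk("Refund policy...", {"source": "faq.md", "lang": "en"}),
    Chunk("退款政策...", {"source": "faq.md", "lang": "zh"}),
]

# Narrow retrieval to English chunks only, then show citation sources.
print([c.metadata["lang"] for c in filter_chunks(chunks, source="faq.md")])  # prints ['en', 'zh']
```

The same metadata fields that drive this kind of filtering can also be surfaced alongside an answer as its citation source.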
+ +![Metadata management](https://assets-docs.dify.ai/2024/12/f3b1ff4b559ebc40f18b8980b3719fe8.png) + +*** \ No newline at end of file diff --git a/en/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx b/en/guides/knowledge-base/knowledge-base-creation/introduction.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx rename to en/guides/knowledge-base/knowledge-base-creation/introduction.mdx diff --git a/en/user-guide/knowledge-base/knowledge-base-creation/sync-from-notion.mdx b/en/guides/knowledge-base/knowledge-base-creation/sync-from-notion.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-base-creation/sync-from-notion.mdx rename to en/guides/knowledge-base/knowledge-base-creation/sync-from-notion.mdx diff --git a/en/user-guide/knowledge-base/knowledge-base-creation/sync-from-website.mdx b/en/guides/knowledge-base/knowledge-base-creation/sync-from-website.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-base-creation/sync-from-website.mdx rename to en/guides/knowledge-base/knowledge-base-creation/sync-from-website.mdx diff --git a/en/user-guide/knowledge-base/knowledge-base-creation/upload-documents.mdx b/en/guides/knowledge-base/knowledge-base-creation/upload-documents.mdx similarity index 100% rename from en/user-guide/knowledge-base/knowledge-base-creation/upload-documents.mdx rename to en/guides/knowledge-base/knowledge-base-creation/upload-documents.mdx diff --git a/en/user-guide/knowledge-base/metadata.mdx b/en/guides/knowledge-base/metadata.mdx similarity index 100% rename from en/user-guide/knowledge-base/metadata.mdx rename to en/guides/knowledge-base/metadata.mdx diff --git a/en/user-guide/knowledge-base/readme.mdx b/en/guides/knowledge-base/readme.mdx similarity index 100% rename from en/user-guide/knowledge-base/readme.mdx rename to en/guides/knowledge-base/readme.mdx diff --git 
a/en/user-guide/knowledge-base/retrieval-test-and-citation.mdx b/en/guides/knowledge-base/retrieval-test-and-citation.mdx similarity index 100% rename from en/user-guide/knowledge-base/retrieval-test-and-citation.mdx rename to en/guides/knowledge-base/retrieval-test-and-citation.mdx diff --git a/en/management/app-management.mdx b/en/guides/management/app-management.mdx similarity index 58% rename from en/management/app-management.mdx rename to en/guides/management/app-management.mdx index d31ea938..24334558 100644 --- a/en/management/app-management.mdx +++ b/en/guides/management/app-management.mdx @@ -2,18 +2,17 @@ title: App Management --- + ### Editing Application Information After creating an application, if you want to modify the application name or description, you can click "Edit info" in the upper left corner of the application to revise the application's icon, name, or description. -![Edit App Info](/images/assets/image-(92).png) +![Edit App Info](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/63a449e9a8ae337b9c067165d1674a45.png) ### Duplicating Application All applications support copying. Click "Duplicate" in the upper left corner of the application. -### Switch to Workflow Orchestrate - ### Exporting Application Applications created in Dify support export in DSL format files, allowing you to import the configuration files into other Dify teams freely. 
You can export DSL files using either of the following two methods: @@ -21,22 +20,31 @@ Applications created in Dify support export in DSL format files, allowing you to * Click "Export DSL" in the application menu button on the "Studio" page * After entering the application's orchestration page, click "Export DSL" in the upper left corner -![](/images/assets/export-dsl.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/544c18d770e230db93d6756bba98d8a7.png) -The DSL file does not include authorization information already filled in [Tool](/en-us/user-guide/build-app/flow-app/nodes/tools) nodes, such as API keys for third-party services. +The DSL file does not include authorization information already filled in [Tool](../workflow/node/tools.md) nodes, such as API keys for third-party services. If the environment variables contain variables of the `Secret` type, a prompt will appear during file export asking whether to allow the export of this sensitive information. -![](/images/assets/export-dsl-secret.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/25ce002ef7f0392fc6b3b6975ae137ec.png) - + Dify DSL is an AI application engineering file standard defined by Dify.AI in v0.6 and later. The file format is YML. This standard covers the basic description of the application, model parameters, orchestration configuration, and other information. - + + +### Importing Application + +To import a Dify application, upload the DSL file to the Dify platform. A version check will be conducted during the import process, and a warning will be issued if a lower version of the DSL file is detected. + +- For SaaS users, the DSL file exported from the SaaS platform will always be the latest version. 
+- For Community users, it is recommended to consult [Upgrade Dify](https://docs.dify.ai/getting-started/install-self-hosted/docker-compose#upgrade-dify) to update the Community Edition and export an updated version of the DSL file, thus avoiding potential compatibility issues. + +![](https://assets-docs.dify.ai/2024/11/487d2c1cc8b86666feb35ea8a346c053.png) ### Deleting Application If you want to remove an application, you can click "Delete" in the upper left corner of the application. - + ⚠️ The deletion of an application cannot be undone. All users will be unable to access your application, and all prompts, orchestration configurations, and logs within the application will be deleted. - + diff --git a/en/guides/management/personal-account-management.mdx b/en/guides/management/personal-account-management.mdx new file mode 100644 index 00000000..511a2d0c --- /dev/null +++ b/en/guides/management/personal-account-management.mdx @@ -0,0 +1,92 @@ +--- +title: Personal Account Management +--- + + +## Login Methods + +The login methods supported by different versions of Dify are as follows: + +
<tr><th>Version</th><th>Login Method</th></tr>
<tr><td>Community</td><td>Email and password</td></tr>
<tr><td>Cloud</td><td>GitHub account authorization, Google account authorization, email and verification code</td></tr>
+ +> Note: For Dify Cloud Service, if the email associated with a GitHub or Google account is the same as the email used to log in with a verification code, the system will automatically link them as the same account, avoiding the need for manual binding and preventing duplicate registrations. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/c4a9bb46f636807f0b59710724fddc40.png) + +## Modifying Personal Information + +To update your personal account information: + +1. Navigate to the Dify team homepage +2. Click on your avatar in the upper right corner +3. Select **"My Account"** + +You can modify the following details: + +* Avatar +* Username +* Email +* Password + +> Note: The password reset feature is only available in the Community Version. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/527c6752596356c263262f5e24ef8941.png) + +### Login Methods + +Dify Cloud supports three login methods: email + verification code, Google authentication, and GitHub authentication. The same Dify account can be accessed directly using email + verification code or through Google/GitHub authentication linked to the same email address, with no additional binding required. + +### Changing Display Language + +To change the display language, click on your avatar in the upper right corner of the Dify team homepage, then click **"Language"**. Dify supports the following languages: + +* English +* Simplified Chinese +* Traditional Chinese +* Portuguese (Brazil) +* French (France) +* Japanese (Japan) +* Korean (South Korea) +* Russian (Russia) +* Italian (Italy) +* Thai (Thailand) +* Indonesian +* Ukrainian (Ukraine) + +Dify welcomes community volunteers to contribute additional language versions. Visit the [GitHub repository](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) to contribute! + +### View Apps Linked to Your Account + +You can view the apps currently linked to your account on the **Account** page.
+ +### Delete Personal Account + +⚠️ Dangerous Operation. Please proceed with caution. + +To confirm the deletion of your Dify SaaS account, click on your avatar in the top right corner, select **“Account”** from the dropdown menu, and then click the **“Delete Account”** button. + +Deleting your account is irreversible, and the same email address cannot be re-registered within 30 days. All workspaces owned by the account will also be deleted, and it will be automatically removed from all shared workspaces. + +Enter the email address you want to delete and the confirmation verification code. Afterward, the system will permanently delete all information related to the account. + +![Delete Personal Account](https://assets-docs.dify.ai/2024/12/ded326f27886b5884969c220ead998d7.png) + +### FAQ + +**1. Can I revert the account deletion if I accidentally delete my account?**\ +Account deletion is irreversible. If there are exceptional circumstances, please contact us at `support@dify.ai` within 20 days of the deletion and provide a detailed explanation. + +**2. What happens to my roles and data in the team after I delete my account?**\ +After account deletion: + +* If you were the **team owner**, the workspace(s) you created will be dissolved, and all data within those workspaces will be deleted. Team members will lose access to the workspace. +* If you were a **team member or admin**, the workspaces you joined will retain their data, including the applications created by your account. Your account will be removed from the member list of those workspaces. + +**3. Can I re-register a new account with the same email after deleting my account?**\ +You cannot re-register a new account using the same email within 30 days of account deletion. + +**4. 
Will my authorizations with third-party services (e.g., Google, GitHub) be revoked after deleting my account?**\ +Yes, all authorizations with third-party services (e.g., Google, GitHub) will be automatically revoked after account deletion. + +**5. Will my Dify subscription be canceled and refunded after deleting my account?**\ +Your Dify subscription will be automatically canceled upon account deletion. However, the subscription fee is non-refundable, and no future charges will be made. diff --git a/en/management/subscription-management.mdx b/en/guides/management/subscription-management.mdx similarity index 68% rename from en/management/subscription-management.mdx rename to en/guides/management/subscription-management.mdx index 9c3824c4..3f1bd00f 100644 --- a/en/management/subscription-management.mdx +++ b/en/guides/management/subscription-management.mdx @@ -2,6 +2,7 @@ title: Subscription Management --- + ### Upgrading Dify Team Subscription Team owners and administrators can upgrade the team subscription plan. Click the **"Upgrade"** button in the upper right corner of the Dify team homepage, select an appropriate package, and complete the payment to upgrade the team's subscription. @@ -12,7 +13,7 @@ After subscribing to Dify's paid services (Professional or Team plan), team owne On the billing page, you can view the usage statistics for various team resources. -![Team billing management](/images/assets/subscription-management-01.png) +![Team billing management](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/07eb0d03a33e2e01df44dbf3cf241f14.png) ### Frequently Asked Questions @@ -23,21 +24,60 @@ Team owners and administrators can navigate to **Settings** → **Billing**, the * Upgrading from Professional to Team plan requires paying the difference for the current month and takes effect immediately. * Downgrading from Team to Professional plan takes effect immediately. 
-![Changing the paid plan](/images/assets/subscription-management-02.jpeg) +![Changing the paid plan](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/c572ba8806b41eb6564fc658d3d8124b.jpeg) Upon cancellation of the subscription plan, **the team will automatically transition to the Sandbox/Free plan at the end of the current billing cycle**. Subsequently, any team members and resources exceeding the Sandbox/Free plan limitations will become inaccessible. #### 2. What changes will occur to the team's available resources after upgrading the subscription plan? -| Resource | Free | Professional | Team | -| ---------------------------------------------------------------------------- | --------- | -------------- | --------------- | -| Team member limit | 1 | 3 | Unlimited | -| Application limit | 10 | 50 | Unlimited | -| Vector space capacity | 5MB | 200MB | 1GB | -| [Marked replies](https://docs.dify.ai/guides/biao-zhu/logs) for applications | 10 | 2000 | 5000 | -| Document uploads for knowledge base | 50 | 500 | 1000 | -| OpenAI conversation quota | 200 total | 5000 per month | 10000 per month | - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
<tr><th>Resource</th><th>Free</th><th>Professional</th><th>Team</th></tr>
<tr><td>Team member limit</td><td>1</td><td>3</td><td>Unlimited</td></tr>
<tr><td>Application limit</td><td>10</td><td>50</td><td>Unlimited</td></tr>
<tr><td>Vector space capacity</td><td>5MB</td><td>200MB</td><td>1GB</td></tr>
<tr><td>Marked replies for applications</td><td>10</td><td>2000</td><td>5000</td></tr>
<tr><td>Document uploads for knowledge base</td><td>50</td><td>500</td><td>1000</td></tr>
<tr><td>OpenAI conversation quota</td><td>200 total</td><td>5000 per month</td><td>10000 per month</td></tr>
Note: * When upgrading from Free to Professional, all resources are increased as shown in the table. diff --git a/en/management/team-members-management.mdx b/en/guides/management/team-members-management.mdx similarity index 66% rename from en/management/team-members-management.mdx rename to en/guides/management/team-members-management.mdx index 1c479c81..cb964e8c 100644 --- a/en/management/team-members-management.mdx +++ b/en/guides/management/team-members-management.mdx @@ -2,19 +2,45 @@ title: Team Members Management --- + This guide explains how to manage members within a Dify team. The team member limits for different Dify versions are below. + + + + + + + + + + + + + + + + + + + +
<tr><th>Sandbox / Free</th><th>Professional</th><th>Team</th><th>Community</th><th>Enterprise</th></tr>
<tr><td>1</td><td>3</td><td>Unlimited</td><td>Unlimited</td><td>Unlimited</td></tr>
### Adding Members - + Only team owners have permission to invite team members. - + To add a member, the team owner can click on the avatar in the upper right corner, then select **"Members"** → **"Add"**. Enter the email address and assign member permissions to complete the process. -![Assigning permissions to team members](/images/assets/team-members-management-01.png) +![Assigning permissions to team members](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/bbd0873959dd3fe342b7212b98e812ae.png) -Invited members can complete their registration through either a URL link or an email invitation. +> For Community Edition, enabling email functionality requires the team owner to configure and activate the email service via system [environment variables](https://docs.dify.ai/getting-started/install-self-hosted/environments). + +- If the invited member has not registered with Dify, they will receive an invitation email. They can complete registration by clicking the link in the email. +- If the invited member is already registered with Dify, permissions will be automatically assigned and **no invitation email will be sent**. The invited member can switch to the new workspace via the menu in the top right corner. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/93a6f055cfaf65dfe138e8ac332f71d1.png) ### Member Permissions @@ -35,13 +61,13 @@ Team members are divided into owners, administrators, editors, and members. ### Removing Members - + Only team owners have permission to remove team members. - + To remove a member, click on the avatar in the upper right corner of the Dify team homepage, navigate to **"Settings"** → **"Members"**, select the member to be removed, and click **"Remove from team"**. 
-![Removing a member](/images/assets/team-members-management-02.png) +![Removing a member](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/management/0596a58b4fc59c9a0fae24bdff90b769.png) ### Frequently Asked Questions diff --git a/en/management/version-control.mdx b/en/guides/management/version-control.mdx similarity index 100% rename from en/management/version-control.mdx rename to en/guides/management/version-control.mdx diff --git a/zh-hans/guides/model-configuration/customizable-model.md b/en/guides/model-configuration/customizable-model.mdx similarity index 50% rename from zh-hans/guides/model-configuration/customizable-model.md rename to en/guides/model-configuration/customizable-model.mdx index ff0754bc..b712dcdd 100644 --- a/zh-hans/guides/model-configuration/customizable-model.md +++ b/en/guides/model-configuration/customizable-model.mdx @@ -1,59 +1,62 @@ -# 自定义模型接入 +--- +title: Custom Model Integration +--- -### 介绍 -供应商集成完成后,接下来为供应商下模型的接入,为了帮助理解整个接入过程,我们以`Xinference`为例,逐步完成一个完整的供应商接入。 +### Introduction -需要注意的是,对于自定义模型,每一个模型的接入都需要填写一个完整的供应商凭据。 +After completing vendor integration, the next step is to integrate models under the vendor. To help understand the entire integration process, we will use `Xinference` as an example to gradually complete a full vendor integration. -而不同于预定义模型,自定义供应商接入时永远会拥有如下两个参数,不需要在供应商 yaml 中定义。 +It is important to note that for custom models, each model integration requires a complete vendor credential. -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/019f6cf5cf1e2a3bda953fea5fa1d158.png) +Unlike predefined models, custom vendor integration will always have the following two parameters, which do not need to be defined in the vendor YAML file. 
-在前文中,我们已经知道了供应商无需实现`validate_provider_credential`,Runtime会自行根据用户在此选择的模型类型和模型名称调用对应的模型层的`validate_credentials`来进行验证。 +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/30f5d554e9f42670c32b6d877dccdff2.png) -#### 编写供应商 yaml +In the previous section, we have learned that vendors do not need to implement `validate_provider_credential`. The Runtime will automatically call the corresponding model layer's `validate_credentials` based on the model type and model name selected by the user for validation. -我们首先要确定,接入的这个供应商支持哪些类型的模型。 +#### Writing Vendor YAML -当前支持模型类型如下: +First, we need to determine what types of models the vendor supports. -* `llm` 文本生成模型 -* `text_embedding` 文本 Embedding 模型 -* `rerank` Rerank 模型 -* `speech2text` 语音转文字 -* `tts` 文字转语音 -* `moderation` 审查 +Currently supported model types are as follows: -`Xinference`支持`LLM`、`Text Embedding`和`Rerank`,那么我们开始编写`xinference.yaml`。 +* `llm` Text Generation Model +* `text_embedding` Text Embedding Model +* `rerank` Rerank Model +* `speech2text` Speech to Text +* `tts` Text to Speech +* `moderation` Moderation + +`Xinference` supports `LLM`, `Text Embedding`, and `Rerank`, so we will start writing `xinference.yaml`. ```yaml -provider: xinference #确定供应商标识 -label: # 供应商展示名称,可设置 en_US 英文、zh_Hans 中文两种语言,zh_Hans 不设置将默认使用 en_US。 +provider: xinference # Specify vendor identifier +label: # Vendor display name, can be set in en_US (English) and zh_Hans (Simplified Chinese). If zh_Hans is not set, en_US will be used by default. en_US: Xorbits Inference -icon_small: # 小图标,可以参考其他供应商的图标,存储在对应供应商实现目录下的 _assets 目录,中英文策略同 label +icon_small: # Small icon, refer to other vendors' icons, stored in the _assets directory under the corresponding vendor implementation directory. Language strategy is the same as label. 
en_US: icon_s_en.svg -icon_large: # 大图标 +icon_large: # Large icon en_US: icon_l_en.svg -help: # 帮助 +help: # Help title: en_US: How to deploy Xinference zh_Hans: 如何部署 Xinference url: en_US: https://github.com/xorbitsai/inference -supported_model_types: # 支持的模型类型,Xinference同时支持LLM/Text Embedding/Rerank +supported_model_types: # Supported model types. Xinference supports LLM/Text Embedding/Rerank - llm - text-embedding - rerank -configurate_methods: # 因为Xinference为本地部署的供应商,并且没有预定义模型,需要用什么模型需要根据Xinference的文档自己部署,所以这里只支持自定义模型 +configurate_methods: # Since Xinference is a locally deployed vendor and does not have predefined models, you need to deploy the required models according to Xinference's documentation. Therefore, only custom models are supported here. - customizable-model provider_credential_schema: credential_form_schemas: ``` -随后,我们需要思考在 Xinference 中定义一个模型需要哪些凭据 +Next, we need to consider what credentials are required to define a model in Xinference. -* 它支持三种不同的模型,因此,我们需要有`model_type`来指定这个模型的类型,它有三种类型,所以我们这么编写 +* It supports three different types of models, so we need `model_type` to specify the type of the model. It has three types, so we write it as follows: ```yaml provider_credential_schema: @@ -77,7 +80,7 @@ provider_credential_schema: en_US: Rerank ``` -* 每一个模型都有自己的名称`model_name`,因此需要在这里定义 +* Each model has its own name `model_name`, so we need to define it here. ```yaml - variable: model_name @@ -91,7 +94,7 @@ provider_credential_schema: en_US: Input model name ``` -* 填写 Xinference 本地部署的地址 +* Provide the address for the local deployment of Xinference. ```yaml - variable: server_url @@ -105,7 +108,7 @@ provider_credential_schema: en_US: Enter the url of your Xinference, for example https://example.com/xxx ``` -* 每个模型都有唯一的 model\_uid,因此需要在这里定义 +* Each model has a unique `model_uid`, so we need to define it here. 
```yaml - variable: model_uid @@ -119,22 +122,22 @@ provider_credential_schema: en_US: Enter the model uid ``` -现在,我们就完成了供应商的基础定义。 +Now, we have completed the basic definition of the vendor. -#### 编写模型代码 +#### Writing Model Code -然后我们以`llm`类型为例,编写`xinference.llm.llm.py` +Next, we will take the `llm` type as an example and write `xinference.llm.llm.py`. -在 `llm.py` 中创建一个 Xinference LLM 类,我们取名为 `XinferenceAILargeLanguageModel`(随意),继承 `__base.large_language_model.LargeLanguageModel` 基类,实现以下几个方法: +In `llm.py`, create a Xinference LLM class, which we will name `XinferenceAILargeLanguageModel` (arbitrary name), inheriting from the `__base.large_language_model.LargeLanguageModel` base class. Implement the following methods: -* LLM 调用 +* LLM Invocation - 实现 LLM 调用的核心方法,可同时支持流式和同步返回。 + Implement the core method for LLM invocation, which can support both streaming and synchronous returns. ```python def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ @@ -152,7 +155,7 @@ provider_credential_schema: """ ``` - 在实现时,需要注意使用两个函数来返回数据,分别用于处理同步返回和流式返回,因为Python会将函数中包含 `yield` 关键字的函数识别为生成器函数,返回的数据类型固定为 `Generator`,因此同步和流式返回需要分别实现,就像下面这样(注意下面例子使用了简化参数,实际实现时需要按照上面的参数列表进行实现): + When implementing, note that you need to use two functions to return data, one for handling synchronous returns and one for streaming returns. This is because Python identifies functions containing the `yield` keyword as generator functions, and the return data type is fixed as `Generator`.
Therefore, synchronous and streaming returns need to be implemented separately, as shown below (note that the example uses simplified parameters; the actual implementation should follow the parameter list above): ```python def _invoke(self, stream: bool, **kwargs) \ @@ -167,9 +170,9 @@ provider_credential_schema: def _handle_sync_response(self, **kwargs) -> LLMResult: return LLMResult(**response) ``` -* 预计算输入 tokens +* Precompute Input Tokens - 若模型未提供预计算 tokens 接口,可直接返回 0。 + If the model does not provide a precompute tokens interface, it can directly return 0. ```python def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], @@ -185,10 +188,10 @@ provider_credential_schema: """ ``` - 有时候,也许你不需要直接返回0,所以你可以使用`self._get_num_tokens_by_gpt2(text: str)`来获取预计算的tokens,这个方法位于`AIModel`基类中,它会使用GPT2的Tokenizer进行计算,但是只能作为替代方法,并不完全准确。 -* 模型凭据校验 + Sometimes, you may not want to directly return 0, so you can use `self._get_num_tokens_by_gpt2(text: str)` to get precomputed tokens. This method is located in the `AIModel` base class and uses GPT2's Tokenizer for calculation. However, it can only be used as an alternative method and is not completely accurate. +* Model Credential Validation - 与供应商凭据校验类似,这里针对单个模型进行校验。 + Similar to vendor credential validation, this is for validating individual model credentials. ```python def validate_credentials(self, model: str, credentials: dict) -> None: @@ -200,18 +203,18 @@ provider_credential_schema: :return: """ ``` -* 模型参数 Schema +* Model Parameter Schema - 与自定义类型不同,由于没有在 yaml 文件中定义一个模型支持哪些参数,因此,我们需要动态实现模型参数的Schema。 + Unlike custom types, since a model's supported parameters are not defined in the YAML file, we need to dynamically generate the model parameter schema. - 如Xinference支持`max_tokens` `temperature` `top_p` 这三个模型参数。 + For example, Xinference supports the `max_tokens`, `temperature`, and `top_p` parameters. 
- 但是有的供应商根据不同的模型支持不同的参数,如供应商`OpenLLM`支持`top_k`,但是并不是这个供应商提供的所有模型都支持`top_k`,我们这里举例 A 模型支持`top_k`,B模型不支持`top_k`,那么我们需要在这里动态生成模型参数的 Schema,如下所示: + However, some vendors support different parameters depending on the model. For instance, the vendor `OpenLLM` supports `top_k`, but not all models provided by this vendor support `top_k`. Here, we illustrate that Model A supports `top_k`, while Model B does not. Therefore, we need to dynamically generate the model parameter schema, as shown below: ```python def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None: """ - used to define customizable model schema + Used to define customizable model schema """ rules = [ ParameterRule( @@ -272,17 +275,17 @@ provider_credential_schema: return entity ``` -* 调用异常错误映射表 +* Invocation Error Mapping Table - 当模型调用异常时需要映射到 Runtime 指定的 `InvokeError` 类型,方便 Dify 针对不同错误做不同后续处理。 + When a model invocation error occurs, it needs to be mapped to the Runtime-specified `InvokeError` type to facilitate Dify's different subsequent processing for different errors. 
Runtime Errors: - * `InvokeConnectionError` 调用连接错误 - * `InvokeServerUnavailableError` 调用服务方不可用 - * `InvokeRateLimitError` 调用达到限额 - * `InvokeAuthorizationError` 调用鉴权失败 - * `InvokeBadRequestError` 调用传参有误 + * `InvokeConnectionError` Invocation connection error + * `InvokeServerUnavailableError` Invocation server unavailable + * `InvokeRateLimitError` Invocation rate limit reached + * `InvokeAuthorizationError` Invocation authorization failed + * `InvokeBadRequestError` Invocation parameter error ```python @property @@ -297,4 +300,4 @@ provider_credential_schema: """ ``` -接口方法说明见:[Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md),具体实现可参考:[llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py)。 +For an explanation of interface methods, see: [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/en\_US/interfaces.md). For specific implementations, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model\_providers/anthropic/llm/llm.py). diff --git a/en/guides/model-configuration/interfaces.mdx b/en/guides/model-configuration/interfaces.mdx new file mode 100644 index 00000000..23f7728f --- /dev/null +++ b/en/guides/model-configuration/interfaces.mdx @@ -0,0 +1,709 @@ +--- +title: Interface Methods +--- + + +This section describes the interface methods and parameter explanations that need to be implemented by providers and various model types. 
+ +## Provider + +Inherit the `__base.model_provider.ModelProvider` base class and implement the following interfaces: + +```python +def validate_provider_credentials(self, credentials: dict) -> None: + """ + Validate provider credentials + You can choose any validate_credentials method of model type or implement validate method by yourself, + such as: get model list api + + if validate failed, raise exception + + :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. + """ +``` + +- `credentials` (object) Credential information + + The parameters of credential information are defined by the `provider_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + +If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error. + +## Model + +Models are divided into 5 different types, each inheriting from different base classes and requiring the implementation of different methods. + +All models need to uniformly implement the following 2 methods: + +- Model Credential Verification + + Similar to provider credential verification, this step involves verification for an individual model. + + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` + + Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error. + +- Invocation Error Mapping Table + + When there is an exception in model invocation, it needs to be mapped to the `InvokeError` type specified by Runtime. 
This facilitates Dify's ability to handle different errors with appropriate follow-up actions.
+
+  Runtime Errors:
+
+  - `InvokeConnectionError` Invocation connection error
+  - `InvokeServerUnavailableError` Invocation service provider unavailable
+  - `InvokeRateLimitError` Invocation reached rate limit
+  - `InvokeAuthorizationError` Invocation authorization failure
+  - `InvokeBadRequestError` Invocation parameter error
+
+  ```python
+  @property
+  def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+      """
+      Map model invoke error to unified error
+      The key is the error type thrown to the caller
+      The value is the error type thrown by the model,
+      which needs to be converted into a unified error type for the caller.
+
+      :return: Invoke error mapping
+      """
+  ```
+
+  You can refer to OpenAI's `_invoke_error_mapping` for an example.
+
+### LLM
+
+Inherit the `__base.large_language_model.LargeLanguageModel` base class and implement the following interfaces:
+
+- LLM Invocation
+
+  Implement the core method for LLM invocation, which can support both streaming and synchronous returns.
+ + + ```python + def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `prompt_messages` (array[[PromptMessage](#PromptMessage)]) List of prompts + + If the model is of the `Completion` type, the list only needs to include one [UserPromptMessage](#UserPromptMessage) element; + + If the model is of the `Chat` type, it requires a list of elements such as [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) depending on the message. + + - `model_parameters` (object) Model parameters + + The model parameters are defined by the `parameter_rules` in the model's YAML configuration. + + - `tools` (array[[PromptMessageTool](#PromptMessageTool)]) [optional] List of tools, equivalent to the `function` in `function calling`. + + That is, the tool list for tool calling. + + - `stop` (array[string]) [optional] Stop sequences + + The model output will stop before the string defined by the stop sequence. 
+
+    - `stream` (bool) Whether to output in a streaming manner, default is True
+
+      Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult).
+
+    - `user` (string) [optional] Unique identifier of the user
+
+      This can help the provider monitor and detect abusive behavior.
+
+  - Returns
+
+    Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult).
+
+- Pre-calculating Input Tokens
+
+  If the model does not provide a pre-calculated tokens interface, you can directly return 0.
+
+  ```python
+  def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
+                     tools: Optional[list[PromptMessageTool]] = None) -> int:
+      """
+      Get number of tokens for given prompt messages
+
+      :param model: model name
+      :param credentials: model credentials
+      :param prompt_messages: prompt messages
+      :param tools: tools for tool calling
+      :return:
+      """
+  ```
+
+  For parameter explanations, refer to the above section on `LLM Invocation`.
+
+- Fetch Custom Model Schema [Optional]
+
+  ```python
+  def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
+      """
+      Get customizable model schema
+
+      :param model: model name
+      :param credentials: model credentials
+      :return: model schema
+      """
+  ```
+
+  When the provider supports adding custom LLMs, implement this method so that the schema of a custom model can be fetched. It returns None by default.
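To make the streaming/synchronous contract of `_invoke` concrete, here is a minimal, self-contained sketch. `FakeChunk` and `FakeResult` are simplified stand-ins for `LLMResultChunk` and `LLMResult`, not the Runtime's actual entities:

```python
from dataclasses import dataclass
from typing import Generator, Union

@dataclass
class FakeChunk:   # stand-in for LLMResultChunk
    delta_text: str

@dataclass
class FakeResult:  # stand-in for LLMResult
    text: str

def invoke(prompt: str, stream: bool = True) -> Union[FakeResult, Generator[FakeChunk, None, None]]:
    """Return a chunk generator when stream=True, a full result otherwise."""
    words = prompt.split()
    if stream:
        # Streaming: yield one chunk per token-like unit
        return (FakeChunk(delta_text=w) for w in words)
    # Blocking: aggregate everything into a single result
    return FakeResult(text=" ".join(words))

chunks = list(invoke("hello from the model", stream=True))
full = invoke("hello from the model", stream=False)
```

A real implementation would build chunks from the provider's SSE events instead of splitting a string, but the return-type branching is the same.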
+
+### TextEmbedding
+
+Inherit the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces:
+
+- Embedding Invocation
+
+  ```python
+  def _invoke(self, model: str, credentials: dict,
+              texts: list[str], user: Optional[str] = None) \
+          -> TextEmbeddingResult:
+      """
+      Invoke text embedding model
+
+      :param model: model name
+      :param credentials: model credentials
+      :param texts: texts to embed
+      :param user: unique user id
+      :return: embeddings result
+      """
+  ```
+
+  - Parameters:
+
+    - `model` (string) Model name
+
+    - `credentials` (object) Credential information
+
+      The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
+
+    - `texts` (array[string]) List of texts, capable of batch processing
+
+    - `user` (string) [optional] Unique identifier of the user
+
+      This can help the provider monitor and detect abusive behavior.
+
+  - Returns:
+
+    [TextEmbeddingResult](#TextEmbeddingResult) entity.
+
+- Pre-calculating Tokens
+
+  ```python
+  def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
+      """
+      Get number of tokens for given texts
+
+      :param model: model name
+      :param credentials: model credentials
+      :param texts: texts to embed
+      :return:
+      """
+  ```
+
+  For parameter explanations, refer to the above section on `Embedding Invocation`.
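When the provider exposes no tokenizer, a common fallback for `get_num_tokens` is a rough length-based estimate. A hypothetical sketch — the 4-characters-per-token heuristic is an assumption, not a Runtime rule, and real implementations should prefer the provider's tokenizer:

```python
def approx_num_tokens(texts: list[str]) -> int:
    """Roughly estimate total tokens across a batch of texts.

    Assumes ~4 characters per token, a common rule of thumb for
    English text; each text counts as at least one token.
    """
    return sum(max(1, len(t) // 4) for t in texts)

total = approx_num_tokens(["hello world", "embed me"])
```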
+ +### Rerank + +Inherit the `__base.rerank_model.RerankModel` base class and implement the following interfaces: + +- Rerank Invocation + + ```python + def _invoke(self, model: str, credentials: dict, + query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, + user: Optional[str] = None) \ + -> RerankResult: + """ + Invoke rerank model + + :param model: model name + :param credentials: model credentials + :param query: search query + :param docs: docs for reranking + :param score_threshold: score threshold + :param top_n: top n + :param user: unique user id + :return: rerank result + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `query` (string) Query request content + + - `docs` (array[string]) List of segments to be reranked + + - `score_threshold` (float) [optional] Score threshold + + - `top_n` (int) [optional] Select the top n segments + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + [RerankResult](#RerankResult) entity. 
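The post-processing implied by `score_threshold` and `top_n` can be sketched in isolation. Plain `(index, text, score)` tuples stand in for `RerankDocument` entities here, and the scores are invented:

```python
from typing import Optional

def filter_rerank(scored_docs: list[tuple[int, str, float]],
                  score_threshold: Optional[float] = None,
                  top_n: Optional[int] = None) -> list[tuple[int, str, float]]:
    """Sort (index, text, score) tuples by score descending,
    then drop entries below the threshold and keep the top n."""
    ranked = sorted(scored_docs, key=lambda d: d[2], reverse=True)
    if score_threshold is not None:
        ranked = [d for d in ranked if d[2] >= score_threshold]
    if top_n is not None:
        ranked = ranked[:top_n]
    return ranked

docs = [(0, "cats", 0.2), (1, "dogs", 0.9), (2, "fish", 0.5)]
result = filter_rerank(docs, score_threshold=0.4, top_n=1)
```

Keeping the original `index` lets the caller map reranked segments back to the input `docs` list, which is why `RerankDocument` carries it.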
+
+### Speech2text
+
+Inherit the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces:
+
+- Invocation
+
+  ```python
+  def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) -> str:
+      """
+      Invoke speech-to-text model
+
+      :param model: model name
+      :param credentials: model credentials
+      :param file: audio file
+      :param user: unique user id
+      :return: text for given audio file
+      """
+  ```
+
+  - Parameters:
+
+    - `model` (string) Model name
+
+    - `credentials` (object) Credential information
+
+      The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
+
+    - `file` (File) File stream
+
+    - `user` (string) [optional] Unique identifier of the user
+
+      This can help the provider monitor and detect abusive behavior.
+
+  - Returns:
+
+    The string after speech-to-text conversion.
+
+### Text2speech
+
+Inherit the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces:
+
+- Invocation
+
+  ```python
+  def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
+      """
+      Invoke text-to-speech model
+
+      :param model: model name
+      :param credentials: model credentials
+      :param content_text: text content to be converted
+      :param streaming: output is streaming
+      :param user: unique user id
+      :return: converted audio file
+      """
+  ```
+
+  - Parameters:
+
+    - `model` (string) Model name
+
+    - `credentials` (object) Credential information
+
+      The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
+
+    - `content_text` (string) The text content that needs to be converted
+
+    - `streaming` (bool) Whether to stream output
+
+    - `user` (string) [optional] Unique identifier of the user
+
+      This can help the provider monitor and detect abusive behavior.
+
+  - Returns:
+
+    The audio stream converted from the text.
+
+### Moderation
+
+Inherit the `__base.moderation_model.ModerationModel` base class and implement the following interfaces:
+
+- Invocation
+
+  ```python
+  def _invoke(self, model: str, credentials: dict,
+              text: str, user: Optional[str] = None) \
+          -> bool:
+      """
+      Invoke moderation model
+
+      :param model: model name
+      :param credentials: model credentials
+      :param text: text to moderate
+      :param user: unique user id
+      :return: false if text is safe, true otherwise
+      """
+  ```
+
+  - Parameters:
+
+    - `model` (string) Model name
+
+    - `credentials` (object) Credential information
+
+      The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
+
+    - `text` (string) Text content
+
+    - `user` (string) [optional] Unique identifier of the user
+
+      This can help the provider monitor and detect abusive behavior.
+
+  - Returns:
+
+    False indicates that the input text is safe, True indicates otherwise.
+
+
+
+## Entities
+
+### PromptMessageRole
+
+Message role
+
+```python
+class PromptMessageRole(Enum):
+    """
+    Enum class for prompt message.
+    """
+    SYSTEM = "system"
+    USER = "user"
+    ASSISTANT = "assistant"
+    TOOL = "tool"
+```
+
+### PromptMessageContentType
+
+Message content types, divided into text and image.
+
+```python
+class PromptMessageContentType(Enum):
+    """
+    Enum class for prompt message content type.
+    """
+    TEXT = 'text'
+    IMAGE = 'image'
+```
+
+### PromptMessageContent
+
+Message content base class, used only for parameter declaration and cannot be initialized.
+ +```python +class PromptMessageContent(BaseModel): + """ + Model class for prompt message content. + """ + type: PromptMessageContentType + data: str +``` + +Currently, two types are supported: text and image. It's possible to simultaneously input text and multiple images. + +You need to initialize `TextPromptMessageContent` and `ImagePromptMessageContent` separately for input. + +### TextPromptMessageContent + +```python +class TextPromptMessageContent(PromptMessageContent): + """ + Model class for text prompt message content. + """ + type: PromptMessageContentType = PromptMessageContentType.TEXT +``` + +If inputting a combination of text and images, the text needs to be constructed into this entity as part of the `content` list. + +### ImagePromptMessageContent + +```python +class ImagePromptMessageContent(PromptMessageContent): + """ + Model class for image prompt message content. + """ + class DETAIL(Enum): + LOW = 'low' + HIGH = 'high' + + type: PromptMessageContentType = PromptMessageContentType.IMAGE + detail: DETAIL = DETAIL.LOW # Resolution +``` + +If inputting a combination of text and images, the images need to be constructed into this entity as part of the `content` list. + +`data` can be either a `url` or a `base64` encoded string of the image. + +### PromptMessage + +The base class for all Role message bodies, used only for parameter declaration and cannot be initialized. + +```python +class PromptMessage(ABC, BaseModel): + """ + Model class for prompt message. + """ + role: PromptMessageRole + content: Optional[str | list[PromptMessageContent]] = None # Supports two types: string and content list. The content list is designed to meet the needs of multimodal inputs. For more details, see the PromptMessageContent explanation. + name: Optional[str] = None +``` + +### UserPromptMessage + +UserMessage message body, representing a user's message. + +```python +class UserPromptMessage(PromptMessage): + """ + Model class for user prompt message. 
+ """ + role: PromptMessageRole = PromptMessageRole.USER +``` + +### AssistantPromptMessage + +Represents a message returned by the model, typically used for `few-shots` or inputting chat history. + +```python +class AssistantPromptMessage(PromptMessage): + """ + Model class for assistant prompt message. + """ + class ToolCall(BaseModel): + """ + Model class for assistant prompt message tool call. + """ + class ToolCallFunction(BaseModel): + """ + Model class for assistant prompt message tool call function. + """ + name: str # tool name + arguments: str # tool arguments + + id: str # Tool ID, effective only in OpenAI tool calls. It's the unique ID for tool invocation and the same tool can be called multiple times. + type: str # default: function + function: ToolCallFunction # tool call information + + role: PromptMessageRole = PromptMessageRole.ASSISTANT + tool_calls: list[ToolCall] = [] # The result of tool invocation in response from the model (returned only when tools are input and the model deems it necessary to invoke a tool). +``` + +Where `tool_calls` are the list of `tool calls` returned by the model after invoking the model with the `tools` input. + +### SystemPromptMessage + +Represents system messages, usually used for setting system commands given to the model. + +```python +class SystemPromptMessage(PromptMessage): + """ + Model class for system prompt message. + """ + role: PromptMessageRole = PromptMessageRole.SYSTEM +``` + +### ToolPromptMessage + +Represents tool messages, used for conveying the results of a tool execution to the model for the next step of processing. + +```python +class ToolPromptMessage(PromptMessage): + """ + Model class for tool prompt message. + """ + role: PromptMessageRole = PromptMessageRole.TOOL + tool_call_id: str # Tool invocation ID. If OpenAI tool call is not supported, the name of the tool can also be inputted. +``` + +The base class's `content` takes in the results of tool execution. 
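A typical tool-calling round trip orders these message types as follows. This sketch uses plain dicts in place of the entity classes purely to show the sequence; the tool name and arguments are invented:

```python
# system -> user -> assistant(tool_calls) -> tool(result)
conversation = [
    {"role": "system", "content": "You are a helpful weather bot."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # AssistantPromptMessage: the model asks to invoke a tool
    {"role": "assistant", "content": "",
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "get_weather",
                                  "arguments": '{"city": "Paris"}'}}]},
    # ToolPromptMessage: content carries the tool's result;
    # tool_call_id links it back to the assistant's tool call
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 18}'},
]

roles = [m["role"] for m in conversation]
```

The list is then passed back to `_invoke` as `prompt_messages` so the model can produce its final answer from the tool result.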
+
+### PromptMessageTool
+
+```python
+class PromptMessageTool(BaseModel):
+    """
+    Model class for prompt message tool.
+    """
+    name: str
+    description: str
+    parameters: dict
+```
+
+---
+
+### LLMResult
+
+```python
+class LLMResult(BaseModel):
+    """
+    Model class for llm result.
+    """
+    model: str # Actual model used
+    prompt_messages: list[PromptMessage] # prompt messages
+    message: AssistantPromptMessage # response message
+    usage: LLMUsage # usage info
+    system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
+```
+
+### LLMResultChunkDelta
+
+In streaming returns, each iteration contains the `delta` entity.
+
+```python
+class LLMResultChunkDelta(BaseModel):
+    """
+    Model class for llm result chunk delta.
+    """
+    index: int
+    message: AssistantPromptMessage # response message
+    usage: Optional[LLMUsage] = None # usage info
+    finish_reason: Optional[str] = None # finish reason, returned only in the last chunk
+```
+
+### LLMResultChunk
+
+Each iteration entity in streaming returns.
+
+```python
+class LLMResultChunk(BaseModel):
+    """
+    Model class for llm result chunk.
+    """
+    model: str # Actual model used
+    prompt_messages: list[PromptMessage] # prompt messages
+    system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
+    delta: LLMResultChunkDelta
+```
+
+### LLMUsage
+
+```python
+class LLMUsage(ModelUsage):
+    """
+    Model class for LLM usage.
+ """ + prompt_tokens: int # Tokens used for prompt + prompt_unit_price: Decimal # Unit price for prompt + prompt_price_unit: Decimal # Price unit for prompt, i.e., the unit price based on how many tokens + prompt_price: Decimal # Cost for prompt + completion_tokens: int # Tokens used for response + completion_unit_price: Decimal # Unit price for response + completion_price_unit: Decimal # Price unit for response, i.e., the unit price based on how many tokens + completion_price: Decimal # Cost for response + total_tokens: int # Total number of tokens used + total_price: Decimal # Total cost + currency: str # Currency unit + latency: float # Request latency (s) +``` + +--- + +### TextEmbeddingResult + +```python +class TextEmbeddingResult(BaseModel): + """ + Model class for text embedding result. + """ + model: str # Actual model used + embeddings: list[list[float]] # List of embedding vectors, corresponding to the input texts list + usage: EmbeddingUsage # Usage information +``` + +### EmbeddingUsage + +```python +class EmbeddingUsage(ModelUsage): + """ + Model class for embedding usage. + """ + tokens: int # Number of tokens used + total_tokens: int # Total number of tokens used + unit_price: Decimal # Unit price + price_unit: Decimal # Price unit, i.e., the unit price based on how many tokens + total_price: Decimal # Total cost + currency: str # Currency unit + latency: float # Request latency (s) +``` + +--- + +### RerankResult + +```python +class RerankResult(BaseModel): + """ + Model class for rerank result. + """ + model: str # Actual model used + docs: list[RerankDocument] # Reranked document list +``` + +### RerankDocument + +```python +class RerankDocument(BaseModel): + """ + Model class for rerank document. 
+ """ + index: int # original index + text: str + score: float +``` diff --git a/en/guides/model-configuration/load-balancing.mdx b/en/guides/model-configuration/load-balancing.mdx new file mode 100644 index 00000000..af5b1f1c --- /dev/null +++ b/en/guides/model-configuration/load-balancing.mdx @@ -0,0 +1,38 @@ +--- +title: Load Balancing +--- + + +Model rate limits are restrictions imposed by model providers on the number of times users or customers can access API services within a specified time frame. These limits help prevent API abuse or misuse, ensure fair access for all users, and control the overall load on the infrastructure. + +In enterprise-level large-scale model API calls, high concurrent requests can exceed rate limits and affect user access. Load balancing can distribute API requests across multiple API endpoints, ensuring all users receive the fastest response and the highest model invocation throughput, thereby ensuring stable business operations. + +You can enable this feature by navigating to **Model Provider -- Model List -- Configure Model Load Balancing** and adding multiple credentials (API keys) for the same model. + +![Model Load Balancing](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/9cabd37adbb8f566f55dacd4a214c84b.png) + + +Model load balancing is a paid feature. You can enable it by [subscribing to SaaS paid services](../../getting-started/cloud.md#subscription-plan) or purchasing the enterprise edition. + + +The default API key is the credential added when initially configuring the model provider. You need to click **Add Configuration** to add different API keys for the same model to use the load balancing feature properly. + +![Configuring Load Balancing](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/4652cb43949869667b4fdd1a4f28fe64.png) + +**At least one additional model credential** must be added to save and enable load balancing. 
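Conceptually, load balancing rotates requests across the configured credentials, putting a key on cooldown when it hits its rate limit. A simplified, illustrative sketch — the key names are invented and this is not Dify's actual implementation:

```python
import itertools
import time

class CredentialPool:
    """Round-robin over multiple API keys, skipping keys in cooldown."""

    def __init__(self, api_keys: list[str], cooldown_seconds: float = 60.0):
        self._cycle = itertools.cycle(api_keys)
        self._count = len(api_keys)
        self._cooldown_until: dict[str, float] = {}
        self._cooldown = cooldown_seconds

    def next_key(self) -> str:
        """Return the next usable key, raising if all are cooling down."""
        for _ in range(self._count):
            key = next(self._cycle)
            if self._cooldown_until.get(key, 0.0) <= time.monotonic():
                return key
        raise RuntimeError("all credentials are rate limited")

    def mark_rate_limited(self, key: str) -> None:
        """Put a key on cooldown after the provider returns a 429."""
        self._cooldown_until[key] = time.monotonic() + self._cooldown

pool = CredentialPool(["key-a", "key-b"], cooldown_seconds=60.0)
first, second = pool.next_key(), pool.next_key()
pool.mark_rate_limited("key-a")
third = pool.next_key()  # skips key-a while it cools down
```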
+ +You can also **temporarily disable** or **delete** configured credentials. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/625abfbd3ac73092fd446940d48652ae.png) + +Once configured, all models with load balancing enabled will be displayed in the model list. + +![Enabling Load Balancing](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/94125c036a277d962fdb50ada5ddf936.png) + + +By default, load balancing uses the Round-robin strategy. If the rate limit is triggered, a 1-minute cooldown period will be applied. + + +You can also configure load balancing from **Add Model**, following the same process as above. + +![Configuring Load Balancing from Add Model](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/f583fd97b2a954ab67db567af16236c0.png) diff --git a/en/guides/model-configuration/new-provider.mdx b/en/guides/model-configuration/new-provider.mdx new file mode 100644 index 00000000..2cecf721 --- /dev/null +++ b/en/guides/model-configuration/new-provider.mdx @@ -0,0 +1,195 @@ +--- +title: Adding a New Provider +--- + + +### Provider Configuration Methods + +Providers support three configuration models: + +**Predefined Model** + +This indicates that users only need to configure unified provider credentials to use the predefined models under the provider. + +**Customizable Model** + +Users need to add credentials configuration for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique **model_uid**. If you want to connect both, you need to configure a **model_uid** for each model. + +**Fetch from Remote** + +Similar to the `predefined-model` configuration method, users only need to configure unified provider credentials, and the models are fetched from the provider using the credential information. 
+
+For instance, with OpenAI, we can fine-tune multiple models based on gpt-3.5-turbo, all under the same **api_key**. When configured as `fetch-from-remote`, developers only need to configure a unified **api_key** to allow Dify Runtime to fetch all the developer's fine-tuned models and connect to Dify.
+
+These three configuration methods **can coexist**, meaning a provider can support `predefined-model` + `customizable-model` or `predefined-model` + `fetch-from-remote`, etc. This allows using predefined models and models fetched from remote with unified provider credentials, and additional custom models can be used if added.
+
+### Configuration Instructions
+
+**Terminology**
+
+* `module`: A `module` is a Python Package, or more colloquially, a folder containing an `__init__.py` file and other `.py` files.
+
+**Steps**
+
+Adding a new provider mainly involves several steps. Here is a brief outline to give you an overall understanding. Detailed steps will be introduced below.
+
+* Create a provider YAML file and write it according to the [Provider Schema](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md).
+* Create provider code and implement a `class`.
+* Create corresponding model type `modules` under the provider `module`, such as `llm` or `text_embedding`.
+* Create same-named code files under the corresponding model `module`, such as `llm.py`, and implement a `class`.
+* If there are predefined models, create same-named YAML files under the model `module`, such as `claude-2.1.yaml`, and write them according to the [AI Model Entity](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md#aimodelentity).
+* Create a provider YAML file and write it according to the provider schema, then write test code to ensure functionality is available.
+
+#### Let's Get Started
+
+To add a new provider, first determine the provider's English identifier, such as `anthropic`, and create a `module` named after it in `model_providers`.
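The resulting layout, taking `anthropic` with a single `llm` model type as an example, looks roughly like this (the file names follow the steps above; `claude-2.1.yaml` is just one possible predefined model):

```shell
model_providers
└── anthropic
    ├── __init__.py
    ├── anthropic.py        # provider class
    ├── anthropic.yaml      # provider schema
    ├── _assets
    │   ├── icon_l_en.png
    │   └── icon_s_en.png
    └── llm
        ├── __init__.py
        ├── claude-2.1.yaml # predefined model schema
        └── llm.py          # model invocation code
```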
+ +Under this `module`, we need to prepare the provider's YAML configuration first. + +**Preparing Provider YAML** + +Taking `Anthropic` as an example, preset the basic information of the provider, supported model types, configuration methods, and credential rules. + +```YAML +provider: anthropic # Provider identifier +label: # Provider display name, can be set in en_US English and zh_Hans Chinese. If zh_Hans is not set, en_US will be used by default. + en_US: Anthropic +icon_small: # Small icon of the provider, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label + en_US: icon_s_en.png +icon_large: # Large icon of the provider, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label + en_US: icon_l_en.png +supported_model_types: # Supported model types, Anthropic only supports LLM +- llm +configurate_methods: # Supported configuration methods, Anthropic only supports predefined models +- predefined-model +provider_credential_schema: # Provider credential rules, since Anthropic only supports predefined models, unified provider credential rules need to be defined + credential_form_schemas: # Credential form item list + - variable: anthropic_api_key # Credential parameter variable name + label: # Display name + en_US: API Key + type: secret-input # Form type, secret-input here represents an encrypted information input box, only displaying masked information when editing. 
+ required: true # Whether it is required + placeholder: # PlaceHolder information + zh_Hans: 在此输入你的 API Key + en_US: Enter your API Key + - variable: anthropic_api_url + label: + en_US: API URL + type: text-input # Form type, text-input here represents a text input box + required: false + placeholder: + zh_Hans: 在此输入你的 API URL + en_US: Enter your API URL +``` + +If the connected provider offers customizable models, such as `OpenAI` which provides fine-tuned models, we need to add [`model_credential_schema`](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md). Taking `OpenAI` as an example: + +```yaml +model_credential_schema: + model: # Fine-tuned model name + label: + en_US: Model Name + zh_Hans: 模型名称 + placeholder: + en_US: Enter your model name + zh_Hans: 输入模型名称 + credential_form_schemas: + - variable: openai_api_key + label: + en_US: API Key + type: secret-input + required: true + placeholder: + zh_Hans: 在此输入你的 API Key + en_US: Enter your API Key + - variable: openai_organization + label: + zh_Hans: 组织 ID + en_US: Organization + type: text-input + required: false + placeholder: + zh_Hans: 在此输入你的组织 ID + en_US: Enter your Organization ID + - variable: openai_api_base + label: + zh_Hans: API Base + en_US: API Base + type: text-input + required: false + placeholder: + zh_Hans: 在此输入你的 API Base + en_US: Enter your API Base +``` + +You can also refer to the [YAML configuration information](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md) in the directories of other providers under the `model_providers` directory. + +**Implement Provider Code** + +We need to create a Python file with the same name under `model_providers`, such as `anthropic.py`, and implement a `class` that inherits from the `__base.provider.Provider` base class, such as `AnthropicProvider`. + +**Custom Model Providers** + +For providers like Xinference that offer custom models, this step can be skipped. 
Just create an empty `XinferenceProvider` class and implement an empty `validate_provider_credentials` method. This method will not actually be used and is only to avoid abstract class instantiation errors. + +```python +class XinferenceProvider(Provider): + def validate_provider_credentials(self, credentials: dict) -> None: + pass +``` + +**Predefined Model Providers** + +Providers need to inherit from the `__base.model_provider.ModelProvider` base class and implement the `validate_provider_credentials` method to validate the provider's unified credentials. You can refer to [AnthropicProvider](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/model_providers/anthropic/anthropic.py). + +```python +def validate_provider_credentials(self, credentials: dict) -> None: + """ + Validate provider credentials + You can choose any validate_credentials method of model type or implement validate method by yourself, + such as: get model list api + + if validate failed, raise exception + + :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. + """ +``` + +You can also reserve the `validate_provider_credentials` implementation first and directly reuse it after implementing the model credential validation method. + +**Adding Models** + +[**Adding Predefined Models**](https://docs.dify.ai/v/zh-hans/guides/model-configuration/predefined-model)**👈🏻** + +For predefined models, we can connect them by simply defining a YAML file and implementing the calling code. + +[**Adding Custom Models**](https://docs.dify.ai/v/zh-hans/guides/model-configuration/customizable-model) **👈🏻** + +For custom models, we only need to implement the calling code to connect them, but the parameters they handle may be more complex. + +*** + +#### Testing + +To ensure the availability of the connected provider/model, each method written needs to have corresponding integration test code written in the `tests` directory. 
+ +Taking `Anthropic` as an example. + +Before writing test code, you need to add the credential environment variables required for testing the provider in `.env.example`, such as: `ANTHROPIC_API_KEY`. + +Before executing, copy `.env.example` to `.env` and then execute. + +**Writing Test Code** + +Create a `module` with the same name as the provider under the `tests` directory: `anthropic`, and continue to create `test_provider.py` and corresponding model type test py files in this module, as shown below: + +```shell +. +├── __init__.py +├── anthropic +│   ├── __init__.py +│   ├── test_llm.py # LLM Test +│   └── test_provider.py # Provider Test +``` + +Write test code for various situations of the implemented code above, and after passing the tests, submit the code. diff --git a/en/guides/model-configuration/predefined-model.mdx b/en/guides/model-configuration/predefined-model.mdx new file mode 100644 index 00000000..5b1b0161 --- /dev/null +++ b/en/guides/model-configuration/predefined-model.mdx @@ -0,0 +1,200 @@ +--- +title: Predefined Model Integration +--- + + +After completing the supplier integration, the next step is to integrate the models under the supplier. + +First, we need to determine the type of model to be integrated and create the corresponding model type `module` in the directory of the respective supplier. + +The currently supported model types are as follows: + +* `llm` Text Generation Model +* `text_embedding` Text Embedding Model +* `rerank` Rerank Model +* `speech2text` Speech to Text +* `tts` Text to Speech +* `moderation` Moderation + +Taking `Anthropic` as an example, `Anthropic` only supports LLM, so we create a `module` named `llm` in `model_providers.anthropic`. + +For predefined models, we first need to create a YAML file named after the model under the `llm` `module`, such as: `claude-2.1.yaml`. 
+ +#### Preparing the Model YAML + +```yaml +model: claude-2.1 # Model identifier +# Model display name, can be set in en_US English and zh_Hans Chinese. If zh_Hans is not set, it will default to en_US. +# You can also not set a label, in which case the model identifier will be used. +label: + en_US: claude-2.1 +model_type: llm # Model type, claude-2.1 is an LLM +features: # Supported features, agent-thought supports Agent reasoning, vision supports image understanding +- agent-thought +model_properties: # Model properties + mode: chat # LLM mode, complete for text completion model, chat for dialogue model + context_size: 200000 # Maximum context size supported +parameter_rules: # Model invocation parameter rules, only LLM needs to provide +- name: temperature # Invocation parameter variable name + # There are 5 preset variable content configuration templates: temperature/top_p/max_tokens/presence_penalty/frequency_penalty + # You can set the template variable name directly in use_template, and it will use the default configuration in entities.defaults.PARAMETER_RULE_TEMPLATE + # If additional configuration parameters are set, they will override the default configuration + use_template: temperature +- name: top_p + use_template: top_p +- name: top_k + label: # Invocation parameter display name + zh_Hans: 取样数量 + en_US: Top k + type: int # Parameter type, supports float/int/string/boolean + help: # Help information, describes the parameter's function + zh_Hans: 仅从每个后续标记的前 K 个选项中采样。 + en_US: Only sample from the top K options for each subsequent token. 
+ required: false # Whether it is required, can be omitted +- name: max_tokens_to_sample + use_template: max_tokens + default: 4096 # Default parameter value + min: 1 # Minimum parameter value, only applicable to float/int + max: 4096 # Maximum parameter value, only applicable to float/int +pricing: # Pricing information + input: '8.00' # Input unit price, i.e., Prompt unit price + output: '24.00' # Output unit price, i.e., unit price of the returned content + unit: '0.000001' # Price unit; a unit of 0.000001 means the prices above are per 1M tokens + currency: USD # Price currency +``` + +It is recommended to prepare all model configurations before starting the implementation of the model code. + +Similarly, you can refer to the YAML configuration information in the directories of other suppliers under the `model_providers` directory. The complete YAML rules can be found in: Schema[^1]. + +#### Implementing Model Invocation Code + +Next, create a Python file with the same name, `llm.py`, under the `llm` `module` to write the implementation code. + +Create an Anthropic LLM class in `llm.py`, which we will name `AnthropicLargeLanguageModel` (the name is arbitrary), inheriting from the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods: + +* LLM Invocation + + Implement the core method for LLM invocation, supporting both streaming and synchronous responses.
+ + ```python + def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + When implementing, note that you need two separate functions to return data: one for the synchronous response and one for the streaming response. Because Python treats any function containing the `yield` keyword as a generator function, whose return type is fixed to `Generator`, the synchronous and streaming paths must be implemented separately and dispatched from `_invoke`, like this (the example below uses simplified parameters; the actual implementation should follow the parameter list above): + + ```python + def _invoke(self, stream: bool, **kwargs) \ + -> Union[LLMResult, Generator]: + if stream: + return self._handle_stream_response(**kwargs) + return self._handle_sync_response(**kwargs) + + def _handle_stream_response(self, **kwargs) -> Generator: + for chunk in response: + yield chunk + + def _handle_sync_response(self, **kwargs) -> LLMResult: + return LLMResult(**response) + ``` +* Precompute Input Tokens + + If the model does not provide an interface for precomputing tokens, simply return 0.
+ + ```python + def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ + ``` +* Model Credentials Validation + + Similar to supplier credentials validation, this validates the credentials for a single model. + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` +* Invocation Error Mapping Table + + When a model invocation error occurs, it needs to be mapped to the `InvokeError` type specified by Runtime, facilitating Dify to handle different errors differently. + + Runtime Errors: + + * `InvokeConnectionError` Invocation connection error + * `InvokeServerUnavailableError` Invocation service unavailable + * `InvokeRateLimitError` Invocation rate limit reached + * `InvokeAuthorizationError` Invocation authorization failed + * `InvokeBadRequestError` Invocation parameter error + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + """ + Map model invoke error to unified error + The key is the error type thrown to the caller + The value is the error type thrown by the model, + which needs to be converted into a unified error type for the caller. + + :return: Invoke error mapping + """ + ``` + +For interface method descriptions, see: [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/interfaces.md), and for specific implementation, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py). 
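To make the error-mapping table concrete, here is a self-contained sketch of how such a table is typically consumed; the `Invoke*` classes below are simplified stand-ins for Runtime's error types, and the mapped built-in exceptions are purely illustrative:

```python
# Simplified stand-ins for Runtime's unified error types.
class InvokeError(Exception): pass
class InvokeConnectionError(InvokeError): pass
class InvokeAuthorizationError(InvokeError): pass

# Key: unified error raised to the caller; value: raw exception types it covers.
INVOKE_ERROR_MAPPING = {
    InvokeConnectionError: [ConnectionError, TimeoutError],
    InvokeAuthorizationError: [PermissionError],
}

def to_unified_error(exc: Exception) -> InvokeError:
    """Translate a raw SDK/transport exception into a unified InvokeError."""
    for unified, raw_types in INVOKE_ERROR_MAPPING.items():
        if isinstance(exc, tuple(raw_types)):
            return unified(str(exc))
    return InvokeError(str(exc))  # fallback for unmapped errors

print(type(to_unified_error(TimeoutError("timed out"))).__name__)  # InvokeConnectionError
```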
+ +[^1]: #### Provider + + * `provider` (string) Supplier identifier, e.g., `openai` + * `label` (object) Supplier display name, i18n, can be set in `en_US` English and `zh_Hans` Chinese + * `zh_Hans` (string) [optional] Chinese label name, if `zh_Hans` is not set, it will default to `en_US`. + * `en_US` (string) English label name + * `description` (object) [optional] Supplier description, i18n + * `zh_Hans` (string) [optional] Chinese description + * `en_US` (string) English description + * `icon_small` (string) [optional] Supplier small icon, stored in the `_assets` directory under the respective supplier implementation directory, follows the same language strategy as `label` + * `zh_Hans` (string) [optional] Chinese icon + * `en_US` (string) English icon + * `icon_large` (string) [optional] Supplier large icon, stored in the `_assets` directory under the respective supplier implementation directory, follows the same language strategy as `label` + * `zh_Hans` (string) [optional] Chinese icon + * `en_US` (string) English icon + * `background` (string) [optional] Background color value, e.g., #FFFFFF, if empty, the default color value will be displayed on the front end. 
+ * `help` (object) [optional] Help information + * `title` (object) Help title, i18n + * `zh_Hans` (string) [optional] Chinese title + * `en_US` (string) English title + * `url` (object) Help link, i18n + * `zh_Hans` (string) [optional] Chinese link + * `en_US` (string) English link + * `supported_model_types` (array[ModelType]) Supported model types + * `configurate_methods` (array[ConfigurateMethod]) Configuration methods + * `provider_credential_schema` (ProviderCredentialSchema) Supplier credential schema + * `model_credential_schema` (ModelCredentialSchema) Model credential schema \ No newline at end of file diff --git a/en/guides/model-configuration/readme.mdx b/en/guides/model-configuration/readme.mdx new file mode 100644 index 00000000..4f592cb3 --- /dev/null +++ b/en/guides/model-configuration/readme.mdx @@ -0,0 +1,76 @@ +--- +title: Model +description: Learn about the Different Models Supported by Dify. +--- + + +Dify is a development platform for LLM-based AI applications. When using Dify for the first time, you need to go to **Settings --> Model Providers** to add and configure the LLM you are going to use. + +![Settings - Model Provider](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/941524058c800a06d39290c48d14673b.png) + +Dify supports major model providers like OpenAI's GPT series and Anthropic's Claude series. Each model's capabilities and parameters differ, so select a model provider that suits your application's needs. **Obtain the API key from the model provider's official website before using it in Dify.** + +## Model Types in Dify + +Dify classifies models into 4 types, each for different uses: + +1. **System Inference Models:** Used in applications for tasks like chat, name generation, and suggesting follow-up questions.
+ + > Providers include [OpenAI](https://platform.openai.com/account/api-keys), [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service/), [Anthropic](https://console.anthropic.com/account/keys), Hugging Face Hub, Replicate, Xinference, OpenLLM, [iFLYTEK SPARK](https://www.xfyun.cn/solutions/xinghuoAPI), [WENXINYIYAN](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application), [TONGYI](https://dashscope.console.aliyun.com/api-key\_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k), [Minimax](https://api.minimax.chat/user-center/basic-information/interface-key), ZHIPU (ChatGLM), [Ollama](https://docs.dify.ai/tutorials/model-configuration/ollama), [LocalAI](https://github.com/mudler/LocalAI), [GPUStack](https://github.com/gpustack/gpustack). +2. **Embedding Models:** Employed for embedding segmented documents in knowledge bases and processing user queries in applications. + + > Providers include OpenAI, ZHIPU (ChatGLM), and Jina AI ([Jina Embeddings](https://jina.ai/embeddings/)). +3. [**Rerank Models**](https://docs.dify.ai/advanced/retrieval-augment/rerank)**:** Enhance search capabilities in LLMs. + + > Providers include Cohere and Jina AI ([Jina Reranker](https://jina.ai/reranker)). +4. **Speech-to-Text Models:** Convert spoken words to text in conversational applications. + + > Provider: OpenAI. + +Dify plans to add more LLM providers as technology and user needs evolve. + +## Hosted Model Trial Service + +Dify offers trial quotas for cloud service users to experiment with different models. Set up your model provider before the trial ends to ensure uninterrupted application use. + +* OpenAI Hosted Model Trial: Includes 200 invocations for models like GPT3.5-turbo, GPT3.5-turbo-16k, and text-davinci-003. + +## Setting the Default Model + +Dify automatically selects the default model based on usage. Configure this in `Settings > Model Provider`.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/c5ac5f32deb020a8aae46045d3ee9c8d.png) + +## Model Integration Settings + +Choose your model in Dify's `Settings > Model Provider`. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/b55cb7d0dcff7ef14cdf6e6aca207790.png) + +Model providers fall into two categories: + +1. Proprietary Models: Developed by providers such as OpenAI and Anthropic. +2. Hosted Models: Offer third-party models, like Hugging Face and Replicate. + +Integration methods differ between these categories. + +**Proprietary Model Providers:** Dify connects to all models from an integrated provider. Set the provider's API key in Dify to integrate. + + +Dify uses [PKCS1\_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html) encryption to protect your API keys. Each user (tenant) has a unique key pair for encryption, ensuring your API keys remain confidential. + + +**Hosted Model Providers:** Integrate third-party models individually. + +Specific integration methods are not detailed here. + +* [Hugging Face](../../development/models-integration/hugging-face.md) +* [Replicate](../../development/models-integration/replicate.md) +* [Xinference](../../development/models-integration/xinference.md) +* [OpenLLM](../../development/models-integration/openllm.md) + +## Using Models + +Once configured, these models are ready for application use. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/model-configuration/6e74424124cfcfe224d3c083f90b54d2.png) diff --git a/en/guides/model-configuration/schema.mdx b/en/guides/model-configuration/schema.mdx new file mode 100644 index 00000000..984f2ede --- /dev/null +++ b/en/guides/model-configuration/schema.mdx @@ -0,0 +1,209 @@ +--- +title: Configuration Rules +--- + + +- Provider rules are based on the [Provider](#Provider) entity. +- Model rules are based on the [AIModelEntity](#AIModelEntity) entity. 
+ +> All entities mentioned below are based on `Pydantic BaseModel` and can be found in the `entities` module. + +### Provider + +- `provider` (string) Provider identifier, e.g., `openai` +- `label` (object) Provider display name, i18n, with `en_US` English and `zh_Hans` Chinese language settings + - `zh_Hans` (string) [optional] Chinese label name, if `zh_Hans` is not set, `en_US` will be used by default. + - `en_US` (string) English label name +- `description` (object) Provider description, i18n + - `zh_Hans` (string) [optional] Chinese description + - `en_US` (string) English description +- `icon_small` (string) [optional] Small provider ICON, stored in the `_assets` directory under the corresponding provider implementation directory, with the same language strategy as `label` + - `zh_Hans` (string) Chinese ICON + - `en_US` (string) English ICON +- `icon_large` (string) [optional] Large provider ICON, stored in the `_assets` directory under the corresponding provider implementation directory, with the same language strategy as `label` + - `zh_Hans` (string) Chinese ICON + - `en_US` (string) English ICON +- `background` (string) [optional] Background color value, e.g., #FFFFFF, if empty, the default frontend color value will be displayed. 
+- `help` (object) [optional] Help information + - `title` (object) Help title, i18n + - `zh_Hans` (string) [optional] Chinese title + - `en_US` (string) English title + - `url` (object) Help link, i18n + - `zh_Hans` (string) [optional] Chinese link + - `en_US` (string) English link +- `supported_model_types` (array[[ModelType](#ModelType)]) Supported model types +- `configurate_methods` (array[[ConfigurateMethod](#ConfigurateMethod)]) Configuration methods +- `provider_credential_schema` ([ProviderCredentialSchema](#ProviderCredentialSchema)) Provider credential specification +- `model_credential_schema` ([ModelCredentialSchema](#ModelCredentialSchema)) Model credential specification + +### AIModelEntity + +- `model` (string) Model identifier, e.g., `gpt-3.5-turbo` +- `label` (object) [optional] Model display name, i18n, with `en_US` English and `zh_Hans` Chinese language settings + - `zh_Hans` (string) [optional] Chinese label name + - `en_US` (string) English label name +- `model_type` ([ModelType](#ModelType)) Model type +- `features` (array[[ModelFeature](#ModelFeature)]) [optional] Supported feature list +- `model_properties` (object) Model properties + - `mode` ([LLMMode](#LLMMode)) Mode (available for model type `llm`) + - `context_size` (int) Context size (available for model types `llm`, `text-embedding`) + - `max_chunks` (int) Maximum number of chunks (available for model types `text-embedding`, `moderation`) + - `file_upload_limit` (int) Maximum file upload limit, in MB (available for model type `speech2text`) + - `supported_file_extensions` (string) Supported file extension formats, e.g., mp3, mp4 (available for model type `speech2text`) + - `default_voice` (string) Default voice, e.g., alloy, echo, fable, onyx, nova, shimmer (available for model type `tts`) + - `voices` (list) List of available voices (available for model type `tts`) + - `mode` (string) Voice model (available for model type `tts`) + - `name` (string) Voice model display name (available for model type `tts`) + - `language` (string) Languages supported by the voice model (available for model type `tts`) + - `word_limit` (int) Word limit per single conversion, split by paragraph by default (available for model type `tts`) + - `audio_type` (string) Supported audio file extension formats, e.g., mp3, wav (available for model type `tts`) + - `max_workers` (int) Number of concurrent workers for text-to-audio conversion (available for model type `tts`) + - `max_characters_per_chunk` (int) Maximum characters per chunk (available for model type `moderation`) +- `parameter_rules` (array[[ParameterRule](#ParameterRule)]) [optional] Model invocation parameter rules +- `pricing` ([PriceConfig](#PriceConfig)) [optional] Pricing information +- `deprecated` (bool) Whether deprecated. If deprecated, the model will no longer be displayed in the list, but those already configured can continue to be used. Default False. + +### ModelType + +- `llm` Text generation model +- `text-embedding` Text Embedding model +- `rerank` Rerank model +- `speech2text` Speech to text +- `tts` Text to speech +- `moderation` Moderation + +### ConfigurateMethod + +- `predefined-model` Predefined model + + Indicates that users can use the predefined models under the provider by configuring the unified provider credentials. +- `customizable-model` Customizable model + + Users need to add credential configuration for each model. + +- `fetch-from-remote` Fetch from remote + + Consistent with the `predefined-model` configuration method, only unified provider credentials need to be configured, and models are obtained from the provider through credential information. + +### ModelFeature + +- `agent-thought` Agent reasoning; generally, models over 70B parameters have chain-of-thought capability. +- `vision` Vision, i.e., image understanding.
+- `tool-call` Tool calling +- `multi-tool-call` Multiple tool calling +- `stream-tool-call` Streaming tool calling + +### FetchFrom + +- `predefined-model` Predefined model +- `fetch-from-remote` Remote model + +### LLMMode + +- `completion` Text completion +- `chat` Dialogue + +### ParameterRule + +- `name` (string) Actual model invocation parameter name +- `use_template` (string) [optional] Template to use + + By default, 5 variable content configuration templates are preset: + + - `temperature` + - `top_p` + - `frequency_penalty` + - `presence_penalty` + - `max_tokens` + + Set the template variable name directly in `use_template` to use the default configuration from `entities.defaults.PARAMETER_RULE_TEMPLATE`; no parameters other than `name` and `use_template` need to be set. If additional configuration parameters are set, they will override the default configuration. + Refer to `openai/llm/gpt-3.5-turbo.yaml`. + +- `label` (object) [optional] Label, i18n + + - `zh_Hans`(string) [optional] Chinese label name + - `en_US` (string) English label name + +- `type`(string) [optional] Parameter type + + - `int` Integer + - `float` Float + - `string` String + - `boolean` Boolean + +- `help` (string) [optional] Help information + + - `zh_Hans` (string) [optional] Chinese help information + - `en_US` (string) English help information + +- `required` (bool) Required, default False.
+ +- `default`(int/float/string/bool) [optional] Default value + +- `min`(int/float) [optional] Minimum value, applicable only to numeric types + +- `max`(int/float) [optional] Maximum value, applicable only to numeric types + +- `precision`(int) [optional] Precision, number of decimal places to keep, applicable only to numeric types + +- `options` (array[string]) [optional] Dropdown option values, applicable only when `type` is `string`; if not set or null, option values are not restricted + +### PriceConfig + +- `input` (float) Input price, i.e., Prompt price +- `output` (float) Output price, i.e., returned content price +- `unit` (float) Pricing unit, e.g., if the price is measured in 1M tokens, the corresponding token amount for the unit price is `0.000001`. +- `currency` (string) Currency unit + +### ProviderCredentialSchema + +- `credential_form_schemas` (array[[CredentialFormSchema](#CredentialFormSchema)]) Credential form standard + +### ModelCredentialSchema + +- `model` (object) Model identifier, variable name defaults to `model` + - `label` (object) Model form item display name + - `en_US` (string) English + - `zh_Hans`(string) [optional] Chinese + - `placeholder` (object) Model prompt content + - `en_US`(string) English + - `zh_Hans`(string) [optional] Chinese +- `credential_form_schemas` (array[[CredentialFormSchema](#CredentialFormSchema)]) Credential form standard + +### CredentialFormSchema + +- `variable` (string) Form item variable name +- `label` (object) Form item label name + - `en_US`(string) English + - `zh_Hans` (string) [optional] Chinese +- `type` ([FormType](#FormType)) Form item type +- `required` (bool) Whether required +- `default`(string) Default value +- `options` (array[[FormOption](#FormOption)]) Specific property of form items of type `select` or `radio`, defining dropdown content +- `placeholder`(object) Specific property of form items of type `text-input`, placeholder content + - `en_US`(string) English + - `zh_Hans` (string)
[optional] Chinese +- `max_length` (int) Specific property of form items of type `text-input`, defining maximum input length, 0 for no limit. +- `show_on` (array[[FormShowOnObject](#FormShowOnObject)]) Displayed when other form item values meet certain conditions, displayed always if empty. + +### FormType + +- `text-input` Text input component +- `secret-input` Password input component +- `select` Single-choice dropdown +- `radio` Radio component +- `switch` Switch component, only supports `true` and `false` values + +### FormOption + +- `label` (object) Label + - `en_US`(string) English + - `zh_Hans`(string) [optional] Chinese +- `value` (string) Dropdown option value +- `show_on` (array[[FormShowOnObject](#FormShowOnObject)]) Displayed when other form item values meet certain conditions, displayed always if empty. + +### FormShowOnObject + +- `variable` (string) Variable name of other form items +- `value` (string) Variable value of other form items diff --git a/en/guides/monitoring/README.mdx b/en/guides/monitoring/README.mdx new file mode 100644 index 00000000..cc19eccb --- /dev/null +++ b/en/guides/monitoring/README.mdx @@ -0,0 +1,8 @@ +--- +title: Introduction +--- + + +You can monitor and track the performance of your application in a production environment within the **Overview** section. In the data analytics dashboard, you can analyze various metrics such as usage costs, latency, user feedback, and performance in the production environment. By continuously debugging and iterating, you can continually improve your application. 
+ +![Guide](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/2aab0029f58b0c43faa8f45349555cdc.png) diff --git a/en/guides/monitoring/analysis.mdx b/en/guides/monitoring/analysis.mdx new file mode 100644 index 00000000..7b0d0949 --- /dev/null +++ b/en/guides/monitoring/analysis.mdx @@ -0,0 +1,38 @@ +--- +title: Data Analysis +--- + + +The **Overview -- Data Analysis** section displays metrics such as usage, active users, and LLM (Large Language Model) invocation costs. This allows you to continuously improve the effectiveness, engagement, and cost-efficiency of your application operations. We will gradually provide more useful visualization capabilities, so please let us know what you need. + +![Overview—Data Analysis](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/273fbe372440ad8da870e6524854fa97.png) + +*** + +**Total Messages** + +Reflects the total number of daily interactions between users and AI. Each time the AI answers a user's question, it counts as one message. Prompt orchestration and debugging sessions are not included. + +**Active Users** + +The number of unique users who have had effective interactions with the AI, defined as having more than one question-and-answer exchange. Prompt orchestration and debugging sessions are not included. + +**Average Session Interactions** + +Reflects the number of continuous interactions per user session. For example, if a user has a 10-round Q\&A with the AI, it is counted as 10. This metric reflects user engagement. It is available only for conversational applications. + +**Token Output Speed** + +The number of tokens output per second, indirectly reflecting the model's generation rate and the application's usage frequency. + +**User Satisfaction Rate** + +The number of likes per 1,000 messages, reflecting the proportion of users who are very satisfied with the answers.
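As a quick illustration of how the two rates above are derived (the daily counts below are hypothetical):

```python
# Hypothetical daily counts for one application.
messages = 4_200            # total messages answered by the AI
likes = 126                 # messages that received a like
output_tokens = 1_890_000   # tokens generated that day
generation_seconds = 52_500 # total time spent generating

satisfaction_per_1k = likes / messages * 1000           # User Satisfaction Rate
tokens_per_second = output_tokens / generation_seconds  # Token Output Speed

print(round(satisfaction_per_1k), round(tokens_per_second))  # 30 36
```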
+ +**Token Usage** + +Reflects the daily token expenditure for language model requests by the application, useful for cost control. + +**Total Conversations** + +Daily AI conversation count; each new conversation session counts as one. A single conversation session may contain multiple message exchanges; messages related to prompt engineering and debugging are not included. diff --git a/en/guides/monitoring/integrate-external-ops-tools/README.mdx b/en/guides/monitoring/integrate-external-ops-tools/README.mdx new file mode 100644 index 00000000..c34fc537 --- /dev/null +++ b/en/guides/monitoring/integrate-external-ops-tools/README.mdx @@ -0,0 +1,34 @@ +--- +title: Integrate External Ops Tools +--- + + +### Why LLMOps Tools Are Necessary + +While LLMs (Large Language Models) possess exceptional reasoning and text generation capabilities, their internal workings are still not fully understood, presenting challenges for developing LLM-based applications. For instance: + +* Evaluating output **quality** +* Assessing inference **costs** +* Measuring model response **latency** +* **Debugging complexity** introduced by chain calls, agents, and tools +* Understanding complex user intents + +Tools like LangSmith and Langfuse, known as LLMOps tools, provide comprehensive tracking and deep evaluation capabilities for LLM applications, offering developers complete lifecycle support from prototyping to production and operations. + +* #### Prototyping Phase + +In the prototyping phase, LLM applications typically involve rapid experimentation with prompt testing, model selection, RAG (Retrieval-Augmented Generation) strategies, and other parameter combinations. Quickly understanding the model's execution performance is crucial in this stage. Integrating Langfuse allows tracking of every step of Dify application execution, providing clear visibility and debugging information, enabling developers to quickly pinpoint issues and reduce debugging time. 
+ +* **Testing Phase** + +In the testing phase, data collection continues to improve and enhance performance. LangSmith can add runs as examples to the dataset, extending test coverage to real-world scenarios. This is a key advantage of having logging and evaluation/testing systems on the same platform. + +* #### Production Phase + +In the production environment, development teams need to carefully monitor key data points, add benchmark datasets, perform manual annotations, and deeply analyze operational results. Especially during large-scale application usage, operations and data teams must continuously monitor application costs and performance, optimizing both the model and application performance. + +### Integrating Dify with Ops Tools + +When using Dify Workflow to orchestrate LLM applications, it typically involves a series of nodes and logic with high complexity. + +Integrating Dify with external Ops tools helps to break the "black box" issue often faced in application orchestration. Developers can simply configure the platform to track data and metrics throughout the application lifecycle, easily assessing the quality, performance, and cost of LLM applications created on Dify. diff --git a/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx b/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx new file mode 100644 index 00000000..e705972b --- /dev/null +++ b/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx @@ -0,0 +1,626 @@ +--- +title: Integrate Langfuse +--- + + +### What is Langfuse + +Langfuse is an open-source LLM engineering platform that helps teams collaborate on debugging, analyzing, and iterating their applications. + + +Introduction to Langfuse: [https://langfuse.com/](https://langfuse.com/) + + +*** + +### How to Configure Langfuse + +1. Register and log in to Langfuse on the [official website](https://langfuse.com/) +2. Create a project in Langfuse. 
After logging in, click **New** on the homepage to create your own project. The **project** will be used to associate with **applications** in Dify for data monitoring. + +
+ +*** + +### List of monitoring data + +#### Trace the information of Workflow and Chatflow + +**Tracing workflow and chatflow** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Workflow | LangFuse Trace |
| --- | --- |
| workflow\_app\_log\_id / workflow\_run\_id | id |
| user\_session\_id | user\_id |
| workflow\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| Model token consumption | usage |
| metadata | metadata |
| error | level |
| error | status\_message |
| \[workflow] | tags |
| \["message", conversation\_mode] | session\_id |
| conversation\_id | parent\_observation\_id |
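Read as code, the mapping above amounts to a simple field translation. The sketch below is an illustration only, not Dify's actual tracing implementation, and the input dict is hypothetical:

```python
# Illustrative translation of a workflow run record into a Langfuse-style
# trace payload; keys mirror the mapping table above.

def workflow_run_to_trace(run: dict) -> dict:
    return {
        "id": run.get("workflow_app_log_id") or run["workflow_run_id"],
        "user_id": run["user_session_id"],
        "name": f"workflow_{run['workflow_run_id']}",
        "start_time": run["start_time"],
        "end_time": run["end_time"],
        "input": run["inputs"],
        "output": run["outputs"],
        "metadata": run.get("metadata", {}),
        "tags": ["workflow"],
    }

trace = workflow_run_to_trace({
    "workflow_run_id": "run-1",
    "workflow_app_log_id": None,   # falls back to workflow_run_id
    "user_session_id": "user-42",
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-01T00:00:05Z",
    "inputs": {"query": "hi"},
    "outputs": {"answer": "hello"},
})
print(trace["id"], trace["name"])  # run-1 workflow_run-1
```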
+**Workflow Trace Info** + +* workflow\_id - Unique ID of Workflow +* conversation\_id - Conversation ID +* workflow\_run\_id - Workflow ID of this runtime +* tenant\_id - Tenant ID +* elapsed\_time - Elapsed time at this runtime +* status - Runtime status +* version - Workflow version +* total\_tokens - Total token used at this runtime +* file\_list - List of files processed +* triggered\_from - Source that triggered this runtime +* workflow\_run\_inputs - Input of this workflow +* workflow\_run\_outputs - Output of this workflow +* error - Error Message +* query - Queries used at runtime +* workflow\_app\_log\_id - Workflow Application Log ID +* message\_id - Relevant Message ID +* start\_time - Start time of this runtime +* end\_time - End time of this runtime +* workflow node executions - Workflow node runtime information +* Metadata + * workflow\_id - Unique ID of Workflow + * conversation\_id - Conversation ID + * workflow\_run\_id - Workflow ID of this runtime + * tenant\_id - Tenant ID + * elapsed\_time - Elapsed time at this runtime + * status - Operational state + * version - Workflow version + * total\_tokens - Total token used at this runtime + * file\_list - List of files processed + * triggered\_from - Source that triggered this runtime + +#### Message Trace Info + +**For trace llm conversation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Message | LangFuse Generation/Trace |
| --- | --- |
| message\_id | id |
| user\_session\_id | user\_id |
| message\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| Model token consumption | usage |
| metadata | metadata |
| error | level |
| error | status\_message |
| \["message", conversation\_mode] | tags |
| conversation\_id | session\_id |
| conversation\_id | parent\_observation\_id |
+**Message Trace Info** + +* message\_id - Message ID +* message\_data - Message data +* user\_session\_id - Session ID for user +* conversation\_model - Conversation model +* message\_tokens - Message tokens +* answer\_tokens - Answer Tokens +* total\_tokens - Total Tokens from Message and Answer +* error - Error Message +* inputs - Input data +* outputs - Output data +* file\_list - List of files processed +* start\_time - Start time +* end\_time - End time +* message\_file\_data - Message of relevant file data +* conversation\_mode - Conversation mode +* Metadata + * conversation\_id - Conversation ID + * ls\_provider - Model provider + * ls\_model\_name - Model ID + * status - Message status + * from\_end\_user\_id - Sending user's ID + * from\_account\_id - Sending account's ID + * agent\_based - Whether agent based + * workflow\_run\_id - Workflow ID of this runtime + * from\_source - Message source + * message\_id - Message ID + +#### Moderation Trace Information + +**Used to track conversation moderation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Moderation | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| moderation | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[moderation] | tags |
| message\_id | parent\_observation\_id |
+**Moderation Trace Info**
+
+* message\_id - Message ID
+* user\_id - User ID
+* workflow\_app\_log\_id - Workflow application log ID
+* inputs - Input data for moderation
+* message\_data - Message data
+* flagged - Whether the content is flagged for attention
+* action - Specific actions taken
+* preset\_response - Preset response
+* start\_time - Start time of moderation
+* end\_time - End time of moderation
+* Metadata
+  * message\_id - Message ID
+  * action - Specific actions taken
+  * preset\_response - Preset response
+
+#### Suggested Question Trace Information
+
+**Used to track suggested questions**
| Suggested Question | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| suggested\_question | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[suggested\_question] | tags |
| message\_id | parent\_observation\_id |
+**Suggested Question Trace Info**
+
+* message\_id - Message ID
+* message\_data - Message data
+* inputs - Input data
+* outputs - Output data
+* start\_time - Start time
+* end\_time - End time
+* total\_tokens - Total tokens
+* status - Message status
+* error - Error message
+* from\_account\_id - ID of the sending account
+* agent\_based - Whether the message is agent-based
+* from\_source - Message source
+* model\_provider - Model provider
+* model\_id - Model ID
+* suggested\_question - Suggested question
+* level - Status level
+* status\_message - Status message
+* Metadata
+  * message\_id - Message ID
+  * ls\_provider - Model provider
+  * ls\_model\_name - Model ID
+  * status - Message status
+  * from\_end\_user\_id - ID of the sending user
+  * from\_account\_id - ID of the sending account
+  * workflow\_run\_id - ID of this workflow run
+  * from\_source - Message source
+
+#### Dataset Retrieval Trace Information
+
+**Used to track knowledge base retrieval**
| Dataset Retrieval | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| dataset\_retrieval | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[dataset\_retrieval] | tags |
| message\_id | parent\_observation\_id |
+**Dataset Retrieval Trace Info**
+
+* message\_id - Message ID
+* inputs - Input content
+* documents - Document data
+* start\_time - Start time
+* end\_time - End time
+* message\_data - Message data
+* Metadata
+  * message\_id - Message ID
+  * ls\_provider - Model provider
+  * ls\_model\_name - Model ID
+  * status - Message status
+  * from\_end\_user\_id - ID of the sending user
+  * from\_account\_id - ID of the sending account
+  * agent\_based - Whether the message is agent-based
+  * workflow\_run\_id - ID of this workflow run
+  * from\_source - Message source
+
+#### Tool Trace Information
+
+**Used to track tool invocation**
| Tool | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| tool\_name | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \["tool", tool\_name] | tags |
| message\_id | parent\_observation\_id |
+**Tool Trace Info**
+
+* message\_id - Message ID
+* tool\_name - Tool name
+* start\_time - Start time
+* end\_time - End time
+* tool\_inputs - Tool inputs
+* tool\_outputs - Tool outputs
+* message\_data - Message data
+* error - Error message, if any
+* inputs - Input of the message
+* outputs - Output of the message
+* tool\_config - Tool configuration
+* time\_cost - Time cost
+* tool\_parameters - Tool parameters
+* file\_url - URL of the associated file
+* Metadata
+  * message\_id - Message ID
+  * tool\_name - Tool name
+  * tool\_inputs - Tool inputs
+  * tool\_outputs - Tool outputs
+  * tool\_config - Tool configuration
+  * time\_cost - Time cost
+  * error - Error message, if any
+  * tool\_parameters - Tool parameters
+  * message\_file\_id - Message file ID
+  * created\_by\_role - Role of the creator
+  * created\_user\_id - User ID of the creator
+
+#### Generate Name Trace
+
+**Used to track conversation title generation**
| Generate Name | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| generate\_name | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[generate\_name] | tags |
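Across the tables above, `parent_observation_id` is what stitches child observations (moderation, suggested questions, retrieval, tool calls) onto the message that triggered them. The sketch below shows how exported records could be regrouped into that parent/child hierarchy; the record shape is assumed for illustration, not an official export format.

```python
from collections import defaultdict

def group_observations(records: list[dict]) -> dict:
    """Group child observation names under their parent_observation_id (illustrative)."""
    children = defaultdict(list)
    for rec in records:
        parent = rec.get("parent_observation_id")
        if parent is not None:  # top-level message traces have no parent
            children[parent].append(rec["name"])
    return dict(children)

tree = group_observations([
    {"name": "message_msg-42", "parent_observation_id": None},
    {"name": "moderation", "parent_observation_id": "msg-42"},
    {"name": "dataset_retrieval", "parent_observation_id": "msg-42"},
])
```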
+**Generate Name Trace Info** + +* conversation\_id - Conversation ID +* inputs - Input data +* outputs - Generated session name +* start\_time - Start time +* end\_time - End time +* tenant\_id - Tenant ID +* Metadata + * conversation\_id - Conversation ID + * tenant\_id - Tenant ID diff --git a/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx b/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx new file mode 100644 index 00000000..e34d1879 --- /dev/null +++ b/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx @@ -0,0 +1,625 @@ +--- +title: Integrate LangSmith +--- + + +### What is LangSmith + +LangSmith is a platform for building production-grade LLM applications. It is used for developing, collaborating, testing, deploying, and monitoring LLM applications. + + +For more details, please refer to [LangSmith](https://www.langchain.com/langsmith). + + +*** + +### How to Configure LangSmith + +#### 1. Register/Login to [LangSmith](https://www.langchain.com/langsmith) + +#### 2. Create a Project + +Create a project in LangSmith. After logging in, click **New Project** on the homepage to create your own project. The **project** will be used to associate with **applications** in Dify for data monitoring. + +![Create a project in LangSmith](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/58e20105fcc0771ca2431e8e5dcc42d3.png) + +Once created, you can view all created projects in the Projects section. + +![View created projects in LangSmith](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/642c0ff7edfdfe77fba43aa22cc3fa71.png) + +#### 3. Create Project Credentials + +Find the project settings **Settings** in the left sidebar. 
+ +![Project settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/c49a1fc769215193928ff0d880422f89.png) + +Click **Create API Key** to create project credentials. + +![Create a project API Key](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/7082286b0d12af4bc0c84d9a3acf8b1b.png) + +Select **Personal Access Token** for subsequent API authentication. + +![Create an API Key](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/75a69bd4dd02f0ffc0313589ae12fb36.png) + +Copy and save the created API key. + +![Copy API Key](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/723e96a13e8f722d6df714b11ffd0bb1.png) + +#### 4. Integrating LangSmith with Dify + +Configure LangSmith in the Dify application. Open the application you need to monitor, open **Monitoring** in the side menu, and select **Tracing app performance** on the page. + +![Tracing app performance](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/b6c7e5d4c2ca2092d59465cca27bc69c.png) + +After clicking configure, paste the **API Key** and **project name** created in LangSmith into the configuration and save. + +![Configure LangSmith](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/93dfabcadb7b2ff597f54beb5e642124.png) + + +The configured project name needs to match the project set in LangSmith. If the project names do not match, LangSmith will automatically create a new project during data synchronization. + + +Once successfully saved, you can view the monitoring status on the current page. 
+ +![View configuration status](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/43369dc4de8f606c166fae2efab97d73.png) + +### Viewing Monitoring Data in LangSmith + +Once configured, the debug or production data from applications within Dify can be monitored in LangSmith. + +![Debugging Applications in Dify](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/a1370fdbb79257cba31a565ac6764802.png) + +When you switch to LangSmith, you can view detailed operation logs of Dify applications in the dashboard. + +![Viewing application data in LangSmith](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/2833b2ffa20927b5328e9624b065beea.png) + +Detailed LLM operation logs through LangSmith will help you optimize the performance of your Dify application. + +![Viewing application data in LangSmith](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/monitoring/integrate-external-ops-tools/beeb4ee50c80de8db7400c1f65727c8c.png) + +### Monitoring Data List + +#### **Workflow/Chatflow Trace Information** + +**Used to track workflows and chatflows** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Workflow | LangSmith Chain |
| --- | --- |
| workflow\_app\_log\_id/workflow\_run\_id | id |
| user\_session\_id | placed in metadata |
| workflow\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| Model token consumption | usage\_metadata |
| metadata | extra |
| error | error |
| \[workflow] | tags |
| "conversation\_id/none for workflow" | conversation\_id in metadata |
| conversion\_id | parent\_run\_id |
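As an illustrative Python sketch of the mapping table above (not an official LangSmith schema), note that the rows marked "placed in metadata" end up nested under the run's `extra` field:

```python
def workflow_to_langsmith_run(run: dict) -> dict:
    """Map a Dify workflow run onto LangSmith chain-run fields (illustrative)."""
    return {
        # the app log ID is preferred when present, else the run ID
        "id": run.get("workflow_app_log_id") or run["workflow_run_id"],
        "name": f"workflow_{run['workflow_id']}",
        "start_time": run["start_time"],
        "end_time": run["end_time"],
        "inputs": run["workflow_run_inputs"],
        "outputs": run["workflow_run_outputs"],
        "error": run.get("error"),
        "tags": ["workflow"],
        "extra": {  # rows marked "placed in metadata"
            "metadata": {
                "user_session_id": run["user_session_id"],
                # None for plain workflows, set for chatflows:
                "conversation_id": run.get("conversation_id"),
            }
        },
    }

ls_run = workflow_to_langsmith_run({
    "workflow_app_log_id": None,
    "workflow_run_id": "run-9",
    "workflow_id": "wf-1",
    "start_time": "2025-03-20T16:50:11",
    "end_time": "2025-03-20T16:50:12",
    "workflow_run_inputs": {"topic": "docs"},
    "workflow_run_outputs": {"text": "done"},
    "user_session_id": "user-7",
})
```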
+**Workflow Trace Info** + +* workflow\_id - Unique identifier of the workflow +* conversation\_id - Conversation ID +* workflow\_run\_id - ID of the current run +* tenant\_id - Tenant ID +* elapsed\_time - Time taken for the current run +* status - Run status +* version - Workflow version +* total\_tokens - Total tokens used in the current run +* file\_list - List of processed files +* triggered\_from - Source that triggered the current run +* workflow\_run\_inputs - Input data for the current run +* workflow\_run\_outputs - Output data for the current run +* error - Errors encountered during the current run +* query - Query used during the run +* workflow\_app\_log\_id - Workflow application log ID +* message\_id - Associated message ID +* start\_time - Start time of the run +* end\_time - End time of the run +* workflow node executions - Information about workflow node executions +* Metadata + * workflow\_id - Unique identifier of the workflow + * conversation\_id - Conversation ID + * workflow\_run\_id - ID of the current run + * tenant\_id - Tenant ID + * elapsed\_time - Time taken for the current run + * status - Run status + * version - Workflow version + * total\_tokens - Total tokens used in the current run + * file\_list - List of processed files + * triggered\_from - Source that triggered the current run + +#### **Message Trace Information** + +**Used to track LLM-related conversations** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Chat | LangSmith LLM |
| --- | --- |
| message\_id | id |
| user\_session\_id | placed in metadata |
| message\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| Model token consumption | usage\_metadata |
| metadata | extra |
| error | error |
| \["message", conversation\_mode] | tags |
| conversation\_id | conversation\_id in metadata |
| conversion\_id | parent\_run\_id |
+**Message Trace Info** + +* message\_id - Message ID +* message\_data - Message data +* user\_session\_id - User session ID +* conversation\_model - Conversation mode +* message\_tokens - Number of tokens in the message +* answer\_tokens - Number of tokens in the answer +* total\_tokens - Total number of tokens in the message and answer +* error - Error information +* inputs - Input data +* outputs - Output data +* file\_list - List of processed files +* start\_time - Start time +* end\_time - End time +* message\_file\_data - File data associated with the message +* conversation\_mode - Conversation mode +* Metadata + * conversation\_id - Conversation ID + * ls\_provider - Model provider + * ls\_model\_name - Model ID + * status - Message status + * from\_end\_user\_id - ID of the sending user + * from\_account\_id - ID of the sending account + * agent\_based - Whether the message is agent-based + * workflow\_run\_id - Workflow run ID + * from\_source - Message source + +#### **Moderation Trace Information** + +**Used to track conversation moderation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Moderation | LangSmith Tool |
| --- | --- |
| user\_id | placed in metadata |
| moderation | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| metadata | extra |
| \[moderation] | tags |
| message\_id | parent\_run\_id |
+**Moderation Trace Info**
+
+* message\_id - Message ID
+* user\_id - User ID
+* workflow\_app\_log\_id - Workflow application log ID
+* inputs - Moderation input data
+* message\_data - Message data
+* flagged - Whether the content is flagged for attention
+* action - Specific actions taken
+* preset\_response - Preset response
+* start\_time - Moderation start time
+* end\_time - Moderation end time
+* Metadata
+  * message\_id - Message ID
+  * action - Specific actions taken
+  * preset\_response - Preset response
+
+#### **Suggested Question Trace Information**
+
+**Used to track suggested questions**
| Suggested Question | LangSmith LLM |
| --- | --- |
| user\_id | placed in metadata |
| suggested\_question | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| metadata | extra |
| \[suggested\_question] | tags |
| message\_id | parent\_run\_id |
+**Suggested Question Trace Info**
+
+* message\_id - Message ID
+* message\_data - Message data
+* inputs - Input content
+* outputs - Output content
+* start\_time - Start time
+* end\_time - End time
+* total\_tokens - Number of tokens
+* status - Message status
+* error - Error information
+* from\_account\_id - ID of the sending account
+* agent\_based - Whether the message is agent-based
+* from\_source - Message source
+* model\_provider - Model provider
+* model\_id - Model ID
+* suggested\_question - Suggested question
+* level - Status level
+* status\_message - Status message
+* Metadata
+  * message\_id - Message ID
+  * ls\_provider - Model provider
+  * ls\_model\_name - Model ID
+  * status - Message status
+  * from\_end\_user\_id - ID of the sending user
+  * from\_account\_id - ID of the sending account
+  * workflow\_run\_id - Workflow run ID
+  * from\_source - Message source
+
+#### **Dataset Retrieval Trace Information**
+
+**Used to track knowledge base retrieval**
| Dataset Retrieval | LangSmith Retriever |
| --- | --- |
| user\_id | placed in metadata |
| dataset\_retrieval | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| metadata | extra |
| \[dataset\_retrieval] | tags |
| message\_id | parent\_run\_id |
+**Dataset Retrieval Trace Info** + +* message\_id - Message ID +* inputs - Input content +* documents - Document data +* start\_time - Start time +* end\_time - End time +* message\_data - Message data +* Metadata + * message\_id - Message ID + * ls\_provider - Model provider + * ls\_model\_name - Model ID + * status - Message status + * from\_end\_user\_id - ID of the sending user + * from\_account\_id - ID of the sending account + * agent\_based - Whether the message is agent-based + * workflow\_run\_id - Workflow run ID + * from\_source - Message source + +#### **Tool Trace Information** + +**Used to track tool invocation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Tool | LangSmith Tool |
| --- | --- |
| user\_id | placed in metadata |
| tool\_name | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| metadata | extra |
| \["tool", tool\_name] | tags |
| message\_id | parent\_run\_id |
+**Tool Trace Info**
+
+* message\_id - Message ID
+* tool\_name - Tool name
+* start\_time - Start time
+* end\_time - End time
+* tool\_inputs - Tool inputs
+* tool\_outputs - Tool outputs
+* message\_data - Message data
+* error - Error information, if any
+* inputs - Inputs for the message
+* outputs - Outputs of the message
+* tool\_config - Tool configuration
+* time\_cost - Time cost
+* tool\_parameters - Tool parameters
+* file\_url - URL of the associated file
+* Metadata
+  * message\_id - Message ID
+  * tool\_name - Tool name
+  * tool\_inputs - Tool inputs
+  * tool\_outputs - Tool outputs
+  * tool\_config - Tool configuration
+  * time\_cost - Time cost
+  * error - Error information, if any
+  * tool\_parameters - Tool parameters
+  * message\_file\_id - Message file ID
+  * created\_by\_role - Role of the creator
+  * created\_user\_id - User ID of the creator
+
+#### **Generate Name Trace Information**
+
+**Used to track conversation title generation**
| Generate Name | LangSmith Tool |
| --- | --- |
| user\_id | placed in metadata |
| generate\_name | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | inputs |
| outputs | outputs |
| metadata | extra |
| \[generate\_name] | tags |
+**Generate Name Trace Info** + +* conversation\_id - Conversation ID +* inputs - Input data +* outputs - Generated conversation name +* start\_time - Start time +* end\_time - End time +* tenant\_id - Tenant ID +* Metadata + * conversation\_id - Conversation ID + * tenant\_id - Tenant ID diff --git a/en/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx b/en/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx new file mode 100644 index 00000000..ff1c82be --- /dev/null +++ b/en/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx @@ -0,0 +1,573 @@ +--- +title: Integrate Opik +--- + + +### What is Opik + +Opik is an open-source platform designed for evaluating, testing, and monitoring large language model (LLM) applications. Developed by Comet, it aims to facilitate more intuitive collaboration, testing, and monitoring of LLM-based applications. + + +For more details, please refer to [Opik](https://www.comet.com/site/products/opik/). + + +--- + +### How to Configure Opik + +#### 1. Register/Login to [Opik](https://www.comet.com/signup?from=llm) + +#### 2. Get your Opik API Key + +Retrieve your Opik API Key from the user menu at the top-right. Click on **API Key**, then on the API Key to copy it: + +![Opik API Key](https://assets-docs.dify.ai/2025/01/a66603f01e4ffaa593a8b78fcf3f8204.png) + +#### 3. Integrating Opik with Dify + +Configure Opik in the Dify application. Open the application you need to monitor, open **Monitoring** in the side menu, and select **Tracing app performance** on the page. + +![Tracing app performance](https://assets-docs.dify.ai/2025/01/9d52a244e3b6cef1874ee838cd976111.png) + +After clicking configure, paste the **API Key** and **project name** created in Opik into the configuration and save. + +![Configure Opik](https://assets-docs.dify.ai/2025/01/7f4c436e2dc9fe94a3ed49219bb3360c.png) + +Once successfully saved, you can view the monitoring status on the current page. 
+ +### Viewing Monitoring Data in Opik + +Once configured, you can debug or use the Dify application as usual. All usage history can be monitored in Opik. + +![Viewing application data in Opik](https://assets-docs.dify.ai/2025/01/a1c5aa80325e6d0223d48a178393baec.png) + +When you switch to Opik, you can view detailed operation logs of Dify applications in the dashboard. + +![Viewing application data in Opik](https://assets-docs.dify.ai/2025/01/09601d45eaf8ed90a4dfb07c34de36ff.png) + +Detailed LLM operation logs through Opik will help you optimize the performance of your Dify application. + +![Viewing application data in Opik](https://assets-docs.dify.ai/2025/01/708533b4fc616f852b5601fe602e3ef5.png) + +### Monitoring Data List + +#### **Workflow/Chatflow Trace Information** + +**Used to track workflows and chatflows** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Workflow | Opik Trace |
| --- | --- |
| workflow_app_log_id/workflow_run_id | id |
| user_session_id | placed in metadata |
| workflow\_{id} | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| Model token consumption | usage_metadata |
| metadata | metadata |
| error | error |
| \[workflow] | tags |
| "conversation_id/none for workflow" | conversation_id in metadata |
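For the Opik side, a tiny sanity-check sketch of the fields the table above says a workflow trace should carry (illustrative only, not an official Opik schema):

```python
# Expected top-level fields of a workflow trace, per the table above (illustrative).
EXPECTED_WORKFLOW_TRACE_KEYS = {
    "id", "name", "start_time", "end_time",
    "inputs", "outputs", "metadata", "tags",
}

def missing_trace_keys(trace: dict) -> set:
    """Return expected keys absent from an exported trace dict."""
    return EXPECTED_WORKFLOW_TRACE_KEYS - trace.keys()

incomplete = {"id": "run-9", "name": "workflow_wf-1", "tags": ["workflow"]}
```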
+**Workflow Trace Info** + +- workflow_id - Unique identifier of the workflow +- conversation_id - Conversation ID +- workflow_run_id - ID of the current run +- tenant_id - Tenant ID +- elapsed_time - Time taken for the current run +- status - Run status +- version - Workflow version +- total_tokens - Total tokens used in the current run +- file_list - List of processed files +- triggered_from - Source that triggered the current run +- workflow_run_inputs - Input data for the current run +- workflow_run_outputs - Output data for the current run +- error - Errors encountered during the current run +- query - Query used during the run +- workflow_app_log_id - Workflow application log ID +- message_id - Associated message ID +- start_time - Start time of the run +- end_time - End time of the run +- workflow node executions - Information about workflow node executions +- Metadata + - workflow_id - Unique identifier of the workflow + - conversation_id - Conversation ID + - workflow_run_id - ID of the current run + - tenant_id - Tenant ID + - elapsed_time - Time taken for the current run + - status - Run status + - version - Workflow version + - total_tokens - Total tokens used in the current run + - file_list - List of processed files + - triggered_from - Source that triggered the current run + +#### **Message Trace Information** + +**Used to track LLM-related conversations** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Chat | Opik LLM |
| --- | --- |
| message_id | id |
| user_session_id | placed in metadata |
| "llm" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| Model token consumption | usage_metadata |
| metadata | metadata |
| \["message", conversation_mode] | tags |
| conversation_id | conversation_id in metadata |
+**Message Trace Info** + +- message_id - Message ID +- message_data - Message data +- user_session_id - User session ID +- conversation_model - Conversation mode +- message_tokens - Number of tokens in the message +- answer_tokens - Number of tokens in the answer +- total_tokens - Total number of tokens in the message and answer +- error - Error information +- inputs - Input data +- outputs - Output data +- file_list - List of processed files +- start_time - Start time +- end_time - End time +- message_file_data - File data associated with the message +- conversation_mode - Conversation mode +- Metadata + - conversation_id - Conversation ID + - ls_provider - Model provider + - ls_model_name - Model ID + - status - Message status + - from_end_user_id - ID of the sending user + - from_account_id - ID of the sending account + - agent_based - Whether the message is agent-based + - workflow_run_id - Workflow run ID + - from_source - Message source + +#### **Moderation Trace Information** + +**Used to track conversation moderation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Moderation | Opik Tool |
| --- | --- |
| user_id | placed in metadata |
| "moderation" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["moderation"] | tags |
+**Moderation Trace Info**
+
+- message_id - Message ID
+- user_id - User ID
+- workflow_app_log_id - Workflow application log ID
+- inputs - Moderation input data
+- message_data - Message data
+- flagged - Whether the content is flagged for attention
+- action - Specific actions taken
+- preset_response - Preset response
+- start_time - Moderation start time
+- end_time - Moderation end time
+- Metadata
+  - message_id - Message ID
+  - action - Specific actions taken
+  - preset_response - Preset response
+
+#### **Suggested Question Trace Information**
+
+**Used to track suggested questions**
| Suggested Question | Opik LLM |
| --- | --- |
| user_id | placed in metadata |
| "suggested_question" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["suggested_question"] | tags |
+**Suggested Question Trace Info**
+
+- message_id - Message ID
+- message_data - Message data
+- inputs - Input content
+- outputs - Output content
+- start_time - Start time
+- end_time - End time
+- total_tokens - Number of tokens
+- status - Message status
+- error - Error information
+- from_account_id - ID of the sending account
+- agent_based - Whether the message is agent-based
+- from_source - Message source
+- model_provider - Model provider
+- model_id - Model ID
+- suggested_question - Suggested question
+- level - Status level
+- status_message - Status message
+- Metadata
+  - message_id - Message ID
+  - ls_provider - Model provider
+  - ls_model_name - Model ID
+  - status - Message status
+  - from_end_user_id - ID of the sending user
+  - from_account_id - ID of the sending account
+  - workflow_run_id - Workflow run ID
+  - from_source - Message source
+
+#### **Dataset Retrieval Trace Information**
+
+**Used to track knowledge base retrieval**
| Dataset Retrieval | Opik Retriever |
| --- | --- |
| user_id | placed in metadata |
| "dataset_retrieval" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["dataset_retrieval"] | tags |
| message_id | parent_run_id |
+**Dataset Retrieval Trace Info** + +- message_id - Message ID +- inputs - Input content +- documents - Document data +- start_time - Start time +- end_time - End time +- message_data - Message data +- Metadata + - message_id - Message ID + - ls_provider - Model provider + - ls_model_name - Model ID + - status - Message status + - from_end_user_id - ID of the sending user + - from_account_id - ID of the sending account + - agent_based - Whether the message is agent-based + - workflow_run_id - Workflow run ID + - from_source - Message source + +#### **Tool Trace Information** + +**Used to track tool invocation** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Tool | Opik Tool |
| --- | --- |
| user_id | placed in metadata |
| tool_name | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["tool", tool_name] | tags |
+**Tool Trace Info**
+
+- message_id - Message ID
+- tool_name - Tool name
+- start_time - Start time
+- end_time - End time
+- tool_inputs - Tool inputs
+- tool_outputs - Tool outputs
+- message_data - Message data
+- error - Error information, if any
+- inputs - Inputs for the message
+- outputs - Outputs of the message
+- tool_config - Tool configuration
+- time_cost - Time cost
+- tool_parameters - Tool parameters
+- file_url - URL of the associated file
+- Metadata
+  - message_id - Message ID
+  - tool_name - Tool name
+  - tool_inputs - Tool inputs
+  - tool_outputs - Tool outputs
+  - tool_config - Tool configuration
+  - time_cost - Time cost
+  - error - Error information, if any
+  - tool_parameters - Tool parameters
+  - message_file_id - Message file ID
+  - created_by_role - Role of the creator
+  - created_user_id - User ID of the creator
+
+#### **Generate Name Trace Information**
+
+**Used to track conversation title generation**
| Generate Name | Opik Tool |
| --- | --- |
| user_id | placed in metadata |
| "generate_conversation_name" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["generate_name"] | tags |
+**Generate Name Trace Info** + +- conversation_id - Conversation ID +- inputs - Input data +- outputs - Generated conversation name +- start_time - Start time +- end_time - End time +- tenant_id - Tenant ID +- Metadata + - conversation_id - Conversation ID + - tenant_id - Tenant ID \ No newline at end of file diff --git a/en/guides/workflow/README.mdx b/en/guides/workflow/README.mdx new file mode 100644 index 00000000..ef72f36e --- /dev/null +++ b/en/guides/workflow/README.mdx @@ -0,0 +1,47 @@ +--- +title: Introduction +--- + +### Basic Introduction + +Workflows reduce system complexity by breaking down complex tasks into smaller steps (nodes), reducing reliance on prompt engineering and model inference capabilities, and enhancing the performance of LLM applications for complex tasks. This also improves the system's interpretability, stability, and fault tolerance. + +Dify workflows are divided into two types: + +* **Chatflow**: Designed for conversational scenarios, including customer service, semantic search, and other conversational applications that require multi-step logic in response construction. +* **Workflow**: Geared towards automation and batch processing scenarios, suitable for high-quality translation, data analysis, content generation, email automation, and more. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/65e9ed3fb88e03f631b3862acc2479e4.png) + +To address the complexity of user intent recognition in natural language input, Chatflow provides question understanding nodes. Compared to Workflow, it adds support for Chatbot features such as conversation history (Memory), annotated replies, and Answer nodes. + +To handle complex business logic in automation and batch processing scenarios, Workflow offers a variety of logic nodes, such as code nodes, IF/ELSE nodes, template transformation, iteration nodes, and more. 
Additionally, it provides capabilities for timed and event-triggered actions, facilitating the construction of automated processes. + +### Common Use Cases + +* Customer Service + +By integrating LLM into your customer service system, you can automate responses to common questions, reducing the workload of your support team. LLM can understand the context and intent of customer queries and generate helpful and accurate answers in real-time. + +* Content Generation + +Whether you need to create blog posts, product descriptions, or marketing materials, LLM can assist by generating high-quality content. Simply provide an outline or topic, and LLM will use its extensive knowledge base to produce engaging, informative, and well-structured content. + +* Task Automation + +LLM can be integrated with various task management systems like Trello, Slack, and Lark to automate project and task management. Using natural language processing, LLM can understand and interpret user input, create tasks, update statuses, and assign priorities without manual intervention. + +* Data Analysis and Reporting + +LLM can analyze large datasets and generate reports or summaries. By providing relevant information to LLM, it can identify trends, patterns, and insights, transforming raw data into actionable intelligence. This is particularly valuable for businesses looking to make data-driven decisions. + +* Email Automation + +LLM can be used to draft emails, social media updates, and other forms of communication. By providing a brief outline or key points, LLM can generate well-structured, coherent, and contextually relevant messages. This saves significant time and ensures your responses are clear and professional. + +### How to Get Started + +* Start by building a workflow from scratch or use system templates to help you get started. +* Get familiar with basic operations, including creating nodes on the canvas, connecting and configuring nodes, debugging workflows, and viewing run history. 
+* Save and publish a workflow. +* Run the published application or call the workflow through an API. diff --git a/en/user-guide/build-app/flow-app/additional-features.mdx b/en/guides/workflow/additional-features.mdx similarity index 83% rename from en/user-guide/build-app/flow-app/additional-features.mdx rename to en/guides/workflow/additional-features.mdx index 5dc79d0f..fcf19a77 100644 --- a/en/user-guide/build-app/flow-app/additional-features.mdx +++ b/en/guides/workflow/additional-features.mdx @@ -1,29 +1,31 @@ --- title: Additional Features -version: 'English' --- + Both Workflow and Chatflow applications support enabling additional features to enhance the user interaction experience. For example, adding a file upload entry, giving the LLM application a self-introduction, or using a welcome message can provide users with a richer interactive experience. Click the **"Features"** button in the upper right corner of the application to add more functionality. #### Workflow +> Note: This method of adding file uploads to Workflow applications is deprecated. We recommend adding custom file variables on the start node instead. + Workflow type applications only support the **"Image Upload"** feature. When enabled, an image upload entry will appear on the usage page of the Workflow application. @@ -35,10 +37,6 @@ Workflow type applications only support the **"Image Upload"** feature. When ena Finally, select the output variable of the LLM node in the END node to complete the setup. - - Setting interface for enabling visual analysis capabilities in LLM nodes - - #### Chatflow Chatflow type applications support the following features: @@ -51,15 +49,10 @@ Chatflow type applications support the following features: Automatically add suggestions for the next question after the conversation is complete, to increase the depth and frequency of dialogue topics. 
* **Text-to-Speech** - Add an audio playback button in the Q&A text box, using a TTS service (needs to be set up in Model Providers) to read out the text. + Add an audio playback button in the Q\&A text box, using a TTS service (needs to be set up in Model Providers) to read out the text. * **File Upload** Supports the following file types: documents, images, audio, video, and other file types. After enabling this feature, application users can upload and update files at any time during the application dialogue. A maximum of 10 files can be uploaded simultaneously, with a size limit of 15MB per file. - - - Settings interface for the file upload feature in Chatflow applications - - * **Citation and Attribution** Commonly used in conjunction with the "Knowledge Retrieval" node to display the reference source documents and attribution parts of the LLM's responses. @@ -75,9 +68,7 @@ This section will mainly introduce the specific usage of the **File Upload** fea **For application users:** Chatflow applications with file upload enabled will display a "paperclip" icon on the right side of the dialogue box. Click it to upload files and interact with the LLM. - - Upload file - +![Upload file](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/b18af11da3f339c496193d9732906849.png) + +**For application developers:** + @@ -106,9 +97,7 @@ The orchestration steps are as follows: 2. Add an LLM node, enable the VISION feature, and select the `sys.files` variable. 3. Add an "Answer" node at the end, filling in the output variable of the LLM node.
- - Enable vision - +![Enable vision](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/3a3582bd9bc8ea94bbdbfeefe6a78571.png) * **Mixed File Types** @@ -121,10 +110,8 @@ If you want the application to have the ability to process both document files a After the application user uploads both document files and images, document files are automatically diverted to the document extractor node, and image files are automatically diverted to the LLM node to achieve joint processing of files. - - Mixed File Types - +![Mixed File Types](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/66471e8e67b2ede0c94bfa1cffeab834.png) * **Audio and Video Files** -LLMs do not yet support direct reading of audio and video files, and the Dify platform has not yet built-in related file processing tools. Application developers can refer to [External Data Tools](/extension/api-based-extension/external-data-tool) to integrate tools for processing file information themselves. \ No newline at end of file +LLMs do not yet support direct reading of audio and video files, and Dify does not yet provide built-in tools for processing them. Application developers can refer to [External Data Tools](../extension/api-based-extension/external-data-tool.md) to integrate their own file-processing tools. diff --git a/en/guides/workflow/bulletin.mdx b/en/guides/workflow/bulletin.mdx new file mode 100644 index 00000000..35dd0e02 --- /dev/null +++ b/en/guides/workflow/bulletin.mdx @@ -0,0 +1,113 @@ +--- +title: Bulletin - Image Upload Replaced by File Upload +--- + +The image upload feature has been integrated into the more comprehensive [File Upload](file-upload.md) functionality.
To avoid redundant features, we have decided to upgrade and adjust the “[Features](additional-features.md)” for Workflow and Chatflow applications as follows: + +* The image upload option in Chatflow’s “Features” has been removed and replaced by the new “File Upload” feature. Within the “File Upload” feature, you can select the image file type. Additionally, the image upload icon in the application dialog has been replaced with a file upload icon. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/edeb9bfe8cafc8c7fa242bf895dfb850.png) + +* The image upload option in Workflow’s “Features” and the `sys.files` [variable](variables.md) will be deprecated in the future. Both have been marked as `LEGACY`, and developers are encouraged to use custom file variables to add file upload functionality to Workflow applications. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/df9f74f5362f31672ebe3a166f0a419a.png) + +### Why Replace the “Image Upload” Feature? + +Previously, Dify only supported image file uploads. In the latest version, a more comprehensive file upload capability has been introduced, supporting documents, images, audio, video, and custom file formats. + +**Image uploading is now part of the broader “File Upload” feature.** When adding the file upload feature, developers can simply check the “image” file type to enable image uploads. + +To avoid confusion caused by redundant features, we have decided to replace the standalone image upload feature in Chatflow applications with the more comprehensive file upload capability, and no longer recommend enabling image upload for Workflow applications. + +### More Comprehensive Functionality: File Upload + +To enhance the information processing capabilities of your applications, we have introduced the “File Upload” feature in this update. Unlike chat text, document files can carry a large amount of information, such as academic reports or legal contracts. 
+ +* The file upload feature allows files to be uploaded, parsed, referenced, and downloaded as file variables within Workflow applications. +* Developers can now easily build applications capable of understanding and processing complex tasks involving images, audio, and video. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/05bf5ca6ea2f384cc171e3041a6f0751.png) + +We no longer recommend using the standalone “Image Upload” feature and instead suggest transitioning to the more comprehensive “File Upload” feature to improve the application experience. + +### What You Need to Do + +#### For Dify Cloud Users: + +* **Chatflow Applications** + +If you have already created Chatflow applications with the “Image Upload” feature enabled and activated the Vision feature in the LLM node, the system will switch the feature automatically, without affecting the application’s image upload capability. If you need to update and republish the application, select the file variable in the Vision variable selection box of the LLM node, clear the item from the checklist, and republish the application. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/bc6e9e3a57844f5f1e50b4716592449d.png) + +If you wish to add the “Image Upload” feature to a Chatflow application, enable “File Upload” in the features and select only the “image” file type. Then enable the Vision feature in the LLM node and specify the `sys.files` variable. The upload entry will appear as a “paperclip” icon. For detailed instructions, refer to [Additional Features](additional-features.md).
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/d8c447886a9bdea1aeeda8991b46742e.png) + +* **Workflow Applications** + +If you have already created Workflow applications with the “Image Upload” feature enabled and activated the Vision feature in the LLM node, this change will not affect you immediately, but you will need to complete the manual migration before the official deprecation. + +If you wish to enable the “Image Upload” feature for a Workflow application, add a file variable in the [Start](node/start.md) node. Then, reference this file variable in subsequent nodes instead of using the `sys.files` variable. + +#### For Dify Community Edition or Self-hosted Enterprise Users: + +After upgrading to version v0.10.0, you will see the “File Upload” feature. + +* Chatflow Applications: + +Chatflow applications with the “Image Upload” feature enabled will automatically switch to the file upload feature, with no changes required. + +If you wish to add the “Image Upload” feature to a Chatflow application, refer to the Additional Features section for detailed instructions. + +* Workflow Applications: + +Existing Workflow applications will not be affected, but please complete the manual migration before the official deprecation. + +If you wish to enable the “Image Upload” feature for a Workflow application, add a file variable in the [Start](node/start.md) node. Then, reference this file variable in subsequent nodes instead of using the `sys.files` variable. + +### FAQs: + +#### 1. Will This Update Affect My Existing Applications? + +* Existing Chatflow applications will automatically migrate, seamlessly switching image upload capabilities to the file upload feature. The `sys.files` variable will still be used as the default Vision input. The image upload entry in the application interface will be replaced with a file upload entry. + +* Existing Workflow applications will not be affected for now.
The `sys.files` variable and the image upload feature have been marked as `LEGACY`, but they can still be used. However, these `LEGACY` features will be deprecated in the future, and a manual update will be required at that time. + +#### 2. Do I Need to Update My Applications Immediately? + +* For Chatflow applications, the system will automatically migrate, and no manual updates are required. +* For Workflow applications, although an immediate update is not necessary, we recommend familiarizing yourself with the new file upload feature to prepare for future migration. + +#### 3. How Can I Ensure My Applications Are Compatible with the New File Upload Feature? + +For Chatflow applications: + +* Check if the file upload option is enabled in the features configuration. + +* Ensure you’re using an LLM with Vision capabilities, and turn on the Vision toggle. + +* Verify that `sys.files` is correctly selected as the input item in the Vision box. + +For Workflow applications: + +* Create a file-type variable in the “Start” node. + +* Reference this file variable in subsequent nodes instead of using the LEGACY `sys.files` variable. + +#### 4. How Do I Restore Missing Image Upload Icons in Previously Published Chatflow Applications? + +Republish the application; the file upload icon will then appear in the application's chat box. + +#### We Value Your Feedback + +As a key member of the Dify community, your experience and feedback are crucial to us. We warmly invite you to: + +1. Try the file upload feature and experience its convenience and flexibility. +2. Share your thoughts and suggestions via the following channels: + + * [GitHub](https://github.com/langgenius/dify) + + * [Discord channel](https://discord.com/invite/FngNHpbcY7) + +Your feedback will help us continuously improve the product and provide a better experience for the entire community.
diff --git a/en/guides/workflow/debug-and-preview/checklist.mdx b/en/guides/workflow/debug-and-preview/checklist.mdx new file mode 100644 index 00000000..5dab42d1 --- /dev/null +++ b/en/guides/workflow/debug-and-preview/checklist.mdx @@ -0,0 +1,8 @@ +--- +title: Checklist +--- + + +Before publishing the App, you can check the checklist to see whether any nodes are incompletely configured or left unconnected. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/be2b57394186c9b607a94614793dd5ca.png) diff --git a/en/guides/workflow/debug-and-preview/history.mdx b/en/guides/workflow/debug-and-preview/history.mdx new file mode 100644 index 00000000..7a84b7d4 --- /dev/null +++ b/en/guides/workflow/debug-and-preview/history.mdx @@ -0,0 +1,8 @@ +--- +title: Run History +--- + + +In the "Run History," you can view the run results and log information from the historical debugging of the current workflow. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/e6b7b1b00b43109d42cc265ed77ab58f.png) diff --git a/en/guides/workflow/debug-and-preview/log.mdx b/en/guides/workflow/debug-and-preview/log.mdx new file mode 100644 index 00000000..2aef418d --- /dev/null +++ b/en/guides/workflow/debug-and-preview/log.mdx @@ -0,0 +1,10 @@ +--- +title: Conversation/Run Logs +--- + + +Clicking **"Run History - View Log - Details"** allows you to see a comprehensive overview of the run in the details section. This includes information on inputs and outputs, metadata, and other relevant data. + +This detailed information enables you to review various aspects of each node throughout the complete execution process of the workflow. You can examine inputs and outputs, analyze token consumption, evaluate runtime duration, and assess other pertinent metrics.
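The per-node metrics surfaced in these logs can also be aggregated programmatically. Below is a minimal sketch in plain Python, assuming hypothetical log records with `node`, `tokens`, and `elapsed_ms` fields (the field names are illustrative, not Dify's actual log schema):

```python
# Hypothetical per-node run records, as they might be exported from a run log
records = [
    {"node": "Start", "tokens": 0, "elapsed_ms": 3},
    {"node": "LLM", "tokens": 512, "elapsed_ms": 1840},
    {"node": "Answer", "tokens": 0, "elapsed_ms": 5},
]

# Aggregate token consumption and runtime, and find the slowest node
total_tokens = sum(r["tokens"] for r in records)
total_ms = sum(r["elapsed_ms"] for r in records)
slowest = max(records, key=lambda r: r["elapsed_ms"])

print(f"tokens={total_tokens}, runtime={total_ms} ms, slowest node={slowest['node']}")
# tokens=512, runtime=1848 ms, slowest node=LLM
```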
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/4eda6b3f833f12ae2ce1a23348678f49.png) diff --git a/en/guides/workflow/debug-and-preview/preview-and-run.mdx b/en/guides/workflow/debug-and-preview/preview-and-run.mdx new file mode 100644 index 00000000..2503bedd --- /dev/null +++ b/en/guides/workflow/debug-and-preview/preview-and-run.mdx @@ -0,0 +1,14 @@ +--- +title: Preview and Run +--- + + +Dify Workflow offers a comprehensive set of execution and debugging features. In conversational applications, clicking "Preview" enters debugging mode. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/91379dc42d0d815e52ddad0cc5450a46.png) + +In workflow applications, clicking "Run" enters debugging mode. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/b92d7536392b1e1f2423d0e3aa113915.png) + +Once in debugging mode, you can debug the configured workflow using the interface on the right side of the screen. diff --git a/en/guides/workflow/debug-and-preview/step-run.mdx b/en/guides/workflow/debug-and-preview/step-run.mdx new file mode 100644 index 00000000..7225238b --- /dev/null +++ b/en/guides/workflow/debug-and-preview/step-run.mdx @@ -0,0 +1,12 @@ +--- +title: Step Run +--- + + +Workflow supports step-by-step debugging of nodes, where you can repeatedly test whether the current node executes as expected. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/36e547165a5088510c99baee4ce42bcd.png) + +After running a step test, you can review the execution status, input/output, and metadata information.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/debug-and-preview/040e1051d33b94d35e4683d3c89691a8.png) diff --git a/en/guides/workflow/error-handling/README.mdx b/en/guides/workflow/error-handling/README.mdx new file mode 100644 index 00000000..0ba28769 --- /dev/null +++ b/en/guides/workflow/error-handling/README.mdx @@ -0,0 +1,147 @@ +--- +title: Introduction +--- + +Workflow applications often comprise multiple interconnected nodes operating in sequence. When an error occurs, such as an API request failure, an LLM output issue, or an unexpected exception, it can disrupt the entire process. Such disruptions force developers to spend significant time troubleshooting, especially in workflows with complex node dependencies. + +**Error handling** introduces robust strategies to manage node failures effectively. These features allow workflows either to log and monitor errors without halting execution or to switch seamlessly to predefined fallback paths, ensuring task continuity. **Developers can significantly improve application reliability and operational resilience by integrating strong error-handling features into critical nodes.** + +Developers no longer need to handle potential node errors by embedding complex logic within nodes or adding extra nodes. The error-handling feature simplifies workflow design, enabling streamlined execution through various predefined strategies. + +### Application Scenarios + +#### 1. Handling Network Exceptions + +**Example**: In a workflow that retrieves and aggregates data from three API services (such as weather services, news summaries, and social media analysis), one service might fail to respond due to request limits, causing data retrieval to fail. With the error-handling feature, the main process can continue using the data from the other two successful services while logging the failed API request. This log helps developers analyze the issue later and refine their service call strategies. + +#### 2.
Backup Workflow Design + +**Example**: An LLM node tasked with generating detailed document summaries may encounter token limit errors when processing lengthy input. The workflow can switch to a backup path by selecting the "Fail branch" option in the error-handling feature. + +For instance, a code node on the alternative path can split the content into smaller chunks and re-invoke the LLM node, preventing the workflow from breaking down. + +#### 3. Predefined Error Messages + +**Example**: When running a workflow, you might occasionally encounter a node returning a vague error message (such as a simple "request failed"), making it difficult to pinpoint the issue quickly. Developers can write predefined error messages within the error-handling feature to provide clearer, more precise error information for subsequent application debugging. + +### Error Handling Feature + +The following four types of nodes support error handling. Click a title to read its detailed documentation: + +* [LLM](../node/llm.md) +* [HTTP](../node/http-request.md) +* [Code](../node/code.md) +* [Tools](../node/tools.md) + +**Retry on Failure** + +Some exceptions can be resolved by retrying the node. In this case, you can enable the **Retry on Failure** feature in the node and set the maximum number of retries and the retry interval. + +![](https://assets-docs.dify.ai/2024/12/18097e4c94b67a79150b967fc50f9f43.png) + +If an error is still reported after retrying the node, execution proceeds according to the strategy predefined in the Error Handling feature. + +**Error Handling** + +The error handling feature provides the following three options: + +* **None**: Do not handle the exception; throw the node’s error message directly and interrupt the entire process. + +* **Default Value**: Allows developers to predefine a fallback output. After an exception occurs, the predefined value replaces the node’s original error output.
+ +* **Fail Branch**: Execute the pre-arranged fail branch after an exception occurs. + +For explanations and configuration methods of each strategy, please refer to the [predefined error handling logic](predefined-error-handling-logic.md). + +![Error handling](https://assets-docs.dify.ai/2024/12/3c198be3a7b9c1f9649bbd8b9a0a9ec5.png) + +### Quick Start + +Scenario: enabling the error-handling feature for a Workflow application. + +The following example demonstrates how to implement error handling within a workflow application, using a fail branch to handle node exceptions. + +The workflow design: an LLM node generates JSON content (either correctly or incorrectly formatted) based on the user’s input instructions, which is then executed and output through Code Node A. + +If Code Node A receives incorrectly formatted JSON content, it follows the predefined error handling design, executing the backup path while continuing the main process. + +1. **Creating a JSON Code Generation Node** + +Create a new Workflow application and add both LLM and Code nodes. Use a Prompt to instruct the LLM to generate either correctly or incorrectly formatted JSON content, which will then be validated through Code Node A. + +**Reference prompt for the LLM node:** + +``` +You are a teaching assistant. According to the user's requirements, you only output a correct or incorrect sample code in json format. +``` + +**JSON validation code for Code Node A:** + +```python +import json + +def main(json_str: str) -> dict: + # json.loads raises an exception for malformed JSON, triggering error handling + obj = json.loads(json_str) + return {'result': obj} +``` + +2. **Enable Error Handling Feature for Node A** + +Node A is responsible for validating JSON content. If it receives incorrectly formatted JSON content, the error handling feature will be triggered and execute the backup path, allowing the subsequent LLM node to fix the incorrect content and revalidate the JSON, thereby continuing the main process.
In the "Error Handling" tab of Node A, select "Fail Branch" and create a new LLM node. + + +3. **Correct the Error Output from Node A** + +In the new LLM node, fill in the prompt and reference Node A's exception output via variables so it can be corrected. Add Node B to revalidate the JSON content. + +4. **End** + +Add a variable aggregation node to consolidate the results from both the correct and error branches and output them to the end node, completing the entire process. + +![](https://assets-docs.dify.ai/2024/12/059b5a814514cd9abe10f1f4077ed17f.png) + +> Click [here](https://assets-docs.dify.ai/2024/12/087861aa20e06bb4f8a2bef7e7ae0522.yml) to download the Demo DSL file. + +### Status Overview + +In workflow applications, understanding both node and workflow status is crucial for effective monitoring and troubleshooting. Let's explore how status indicators help developers track execution progress and handle exceptions efficiently. + +#### Node Status Types + +* **Success**: Every node runs properly - the node completes its task and produces the expected output. +* **Failure**: When error handling isn't enabled, the node stops working and reports an error. +* **Exception**: Even though an error occurs, the node doesn't completely fail because error handling (either default values or alternative paths) kicks in to manage the situation. + +#### Workflow Status Types + +* **Success**: A perfect run - all nodes complete their tasks successfully, and the workflow produces the intended output. +* **Failure**: The workflow stops completely due to an unhandled node error. +* **Partial Success**: Think of this as a "managed failure" - while some nodes encounter issues, error handling mechanisms keep the workflow moving forward to completion. + +### FAQ + +1.
**What is the difference before and after enabling the exception handling mechanism?** + +#### Before Implementation + +Without error handling, workflows are quite fragile: + +* A single node failure (like an LLM timeout or network hiccup) brings everything to a halt +* Developers must manually investigate and fix issues before restarting +* Creating workarounds means building complex, redundant safety nets +* Error messages tend to be vague and unhelpful + +#### After Implementation + +Error handling transforms your workflow into a more resilient system: + +* Workflows keep running even when things go wrong +* Developers can create custom responses for different types of errors +* The overall design becomes cleaner and more maintainable +* Detailed error logging makes troubleshooting much faster + +*** + +2. **How do I debug the execution of backup paths?** + +Need to check if your error handling is working? It's simple - just look for yellow-highlighted paths in your workflow logs. These visual indicators show exactly when and where your backup error handling routes are being used. diff --git a/en/guides/workflow/error-handling/error-type.mdx b/en/guides/workflow/error-handling/error-type.mdx new file mode 100644 index 00000000..6ce5313d --- /dev/null +++ b/en/guides/workflow/error-handling/error-type.mdx @@ -0,0 +1,121 @@ +--- +title: Error Type +--- + + +This article summarizes the potential exceptions and corresponding error types that may occur in different types of nodes. + +### General Error + +* **System Error** + + Typically caused by system issues such as a disabled sandbox service or network connection problems. +* **Operational Error** + + Occurs when developers are unable to configure or run the node correctly. + +### Code Node + +[Code](../node/code.md) nodes support running Python and JavaScript code for data transformation in workflows or chat flows. Here are 4 common runtime errors: + +1.
**Code Node Error (CodeNodeError)** + + This error occurs due to exceptions in developer-written code, such as missing variables, faulty calculation logic, or treating a string-array input as a string variable. You can locate the issue using the error message and exact line number. + +![Code Error](https://assets-docs.dify.ai/2024/12/c86b11af7f92368180ea1bac38d77083.png) + +2. **Sandbox Network Issues (System Error)** + + This error commonly occurs when there are network traffic or connection issues, such as when the sandbox service isn't running or proxy services have interrupted the network. You can resolve this through the following steps: + + 1. Check network service quality + 2. Start the sandbox service + 3. Verify proxy settings + +![Sandbox network issues](https://assets-docs.dify.ai/2024/12/d95007adf67c4f232e46ec455c348e2c.PNG) + +3. **Depth Limit Error (DepthLimitError)** + + The current node's default configuration only supports up to 5 levels of nested structures. An error will occur if nesting exceeds 5 levels. + +![DepthLimitError](https://assets-docs.dify.ai/2024/12/5649d52a6e80ddd4180b336266701f7b.png) + +4. **Output Validation Error (OutputValidationError)** + + An error occurs when the actual output variable type doesn't match the selected output variable type. Developers need to change the selected output variable type to avoid this issue. + +![](https://assets-docs.dify.ai/2024/12/ab8cae01a590b037017dfe9ea4dbbb8b.png) + +### LLM Node + +The [LLM](../node/llm.md) node is a core component of Chatflow and Workflow, utilizing an LLM's capabilities in dialogue, generation, classification, and processing to complete various tasks based on user input instructions. + +Here are 6 common runtime errors: + +1. **Variable Not Found Error (VariableNotFoundError)** + + This error occurs when the LLM cannot find system prompts or variables set in the context. Application developers can resolve this by replacing the problematic variables.
+ +![](https://assets-docs.dify.ai/2024/12/f20c5fbde345144de6183374ab277662.png) + +2. **Invalid Context Structure Error (InvalidContextStructureError)** + + An error occurs when the context within the LLM node receives an invalid data structure (such as `array[object]`). + + > Context only supports string (String) data structures. + +![InvalidContextStructureError](https://assets-docs.dify.ai/2024/12/f20c5fbde345144de6183374ab277662.png) + +3. **Invalid Variable Type Error (InvalidVariableTypeError)** + + This error appears when the system prompt type is not in the standard Prompt text format or Jinja syntax format. +4. **Model Not Exist Error (ModelNotExistError)** + + Each LLM node requires a configured model. This error occurs when no model is selected. +5. **LLM Authorization Required Error (LLMModeRequiredError)** + + The model selected in the LLM node has no configured API Key. You can refer to the documentation for model authorization. +6. **No Prompt Found Error (NoPromptFoundError)** + + An error occurs when the LLM node's prompt is empty, as prompts cannot be blank. + +![](https://assets-docs.dify.ai/2024/12/9882f7a5ee544508ba11b51fb469a911.png) + +### HTTP + +[HTTP](../node/http-request.md) nodes allow seamless integration with external services through customizable requests for data retrieval, webhook triggering, image generation, or file downloads via HTTP requests. Here are 5 common errors for this node: + +1. **Authorization Configuration Error (AuthorizationConfigError)** + + This error occurs when authentication information (Auth) is not configured. +2. **File Fetch Error (FileFetchError)** + + This error appears when file variables cannot be retrieved. +3. **Invalid HTTP Method Error (InvalidHttpMethodError)** + + An error occurs when the request method is not one of the following: GET, HEAD, POST, PUT, PATCH, or DELETE. +4. **Response Size Error (ResponseSizeError)** + + HTTP response size is limited to 10MB.
An error occurs if the response exceeds this limit. +5. **HTTP Response Code Error (HTTPResponseCodeError)** + + An error occurs when the response returns a status code that does not start with 2 (success codes such as 200 and 201 do). If exception handling is enabled, status codes such as 400, 404, and 500 will raise errors; otherwise, they won’t trigger errors. + +### Tool + +The following 3 errors commonly occur during runtime: + +1. **Tool Execution Error (ToolNodeError)** + + An error occurs during tool execution itself, such as when the target API’s request limit is reached. + + + + ![](https://assets-docs.dify.ai/2024/12/84af0831b7cb23e64159dfbba80e9b28.jpg) +2. **Tool Parameter Error (ToolParameterError)** + + An error occurs when the configured tool node parameters are invalid, such as passing parameters that don’t match the tool node’s defined parameters. +3. **Tool File Processing Error (ToolFileError)** + + An error occurs when the tool node cannot find the required files. + + + + + diff --git a/en/guides/workflow/error-handling/predefined-error-handling-logic.mdx b/en/guides/workflow/error-handling/predefined-error-handling-logic.mdx new file mode 100644 index 00000000..57482bcf --- /dev/null +++ b/en/guides/workflow/error-handling/predefined-error-handling-logic.mdx @@ -0,0 +1,78 @@ +--- +title: Predefined Error Handling Logic +--- + + +Here are four types of nodes that provide predefined logic for handling unexpected situations: + +* [LLM](../node/llm.md) + +* [HTTP](../node/http-request.md) + +* [Code](../node/code.md) + +* [Tool](../node/tools.md) + +The error handling feature provides three predefined options: + +* **None**: Errors are not handled. The node throws its built-in error message, causing the entire workflow to stop. + +* **Default value**: Developers can predefine an alternative output for the node.
If an error occurs, the workflow outputs the predefined value instead of the node’s original error output, allowing the process to continue seamlessly. + +* **Fail branch**: When an error occurs, a predefined error-handling branch is executed. This provides flexibility for developers to create alternative paths in the workflow to address the failure scenario. + +![](https://assets-docs.dify.ai/2024/12/6e2655949889d4d162945d840d698649.png) + +### Logic: None + +This is the default option for the node’s error-handling feature. If the node encounters a timeout or an error during execution, it directly throws the node’s built-in error message, immediately halting the entire workflow. The workflow execution is then recorded as failed. + +### Logic: Default Value + +This option lets developers customize a node’s error output through the default value editor, similar to the step-by-step debugging approach used in programming. It helps clarify issues, making the debugging process more transparent and efficient. + +For example: + +* For `object` and `array` data types, the system provides an intuitive JSON editor. +* For `number` and `string` data types, corresponding type-specific editors are available. + +When a node fails to execute, the workflow automatically uses the developer’s predefined default value to replace the original error output from the node, ensuring the workflow remains uninterrupted. Clearer error messages improve troubleshooting efficiency, allowing developers to focus on optimizing the workflow design. + +The predefined default value’s data type must match the node’s output variable type. For example, if the output variable of a code node is set to the data type `array[number]`, the default value must also be of the `array[number]` data type.
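The default-value semantics can be sketched in plain Python (a hypothetical illustration, not Dify's implementation): when the node raises, a predefined value, which must match the declared output type, stands in for its output.

```python
from typing import Any, Callable

def run_with_default(node: Callable[[], dict[str, Any]],
                     default_value: dict[str, Any]) -> dict[str, Any]:
    """Run a node; on any error, substitute the predefined default output."""
    try:
        return node()
    except Exception:
        return default_value

def flaky_node() -> dict[str, Any]:
    # Simulates a node failure, e.g. a timeout or a type error
    raise RuntimeError("simulated node failure")

# Output variable declared as array[number], so the default is too
print(run_with_default(flaky_node, {"result": [0.0]}))  # {'result': [0.0]}
```

When the node succeeds, its own output passes through unchanged; the default is used only on failure.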
+ +![Error handling: default value](https://assets-docs.dify.ai/2024/12/e9e5e757090679243e0c9976093c7e6c.png) + +### Logic: Fail Branch + +If the current node encounters an error, it triggers the predefined fail branch. When you select the fail branch option, new connection points are provided for the node, allowing developers to continue designing the workflow or to add downstream nodes by clicking the bottom-right corner of the node details. + +For instance, you might connect a mail tool node to send error notifications, providing real-time alerts when issues arise. + +> The fail branch is highlighted in orange. + +![](https://assets-docs.dify.ai/2024/12/e5ea1af947818bd9e27cab3042c1c4f3.png) + +A common approach to handling errors is to enable the fail branch on error-prone nodes. Nodes on the fail branch can address the issue, and the corrected outputs can be merged back into the main flow by using variable aggregation nodes to ensure consistency in the final results. + +### Exception Variables + +When the **Default Value** or **Fail Branch** is selected for a node’s error handling, the node will pass the error information to downstream nodes through the `error_type` and `error_message` exception variables when an error occurs. + +
VariableDescriptions
`error_type`Error Types. Different types of nodes come with distinct error types. Developers can design tailored solutions based on these error identifiers.
`error_message`Error information. Specific fault information is output by the abnormal node. Developers can pass it to the downstream LLM node for repair or connect to the mailbox tool to push information.
diff --git a/en/guides/workflow/export_import.mdx b/en/guides/workflow/export_import.mdx new file mode 100644 index 00000000..e9ad1b8d --- /dev/null +++ b/en/guides/workflow/export_import.mdx @@ -0,0 +1,18 @@ +--- +title: Export/Import +--- + + +You can export/import application templates as YAML-format DSL (Domain Specific Language) files within the studio to share applications with your team members. + +To import a DSL file in the studio application list: + +![](/en/.gitbook/assets/guides/workflow/export-import/output (5) (2).png) + +To export a DSL file from the studio application list: + +![](/en/.gitbook/assets/guides/workflow/export-import/output (6) (1).png) + +To export a DSL file from the workflow orchestration page: + +![](/en/.gitbook/assets/guides/workflow/export-import/output (7) (1).png) diff --git a/en/user-guide/build-app/flow-app/file-upload.mdx b/en/guides/workflow/file-upload.mdx similarity index 58% rename from en/user-guide/build-app/flow-app/file-upload.mdx rename to en/guides/workflow/file-upload.mdx index ea286ab0..2989e222 100644 --- a/en/user-guide/build-app/flow-app/file-upload.mdx +++ b/en/guides/workflow/file-upload.mdx @@ -1,8 +1,8 @@ --- title: File Upload -version: 'English' --- + Compared to chat text, document files can contain vast amounts of information, such as academic reports and legal contracts. However, Large Language Models (LLMs) are inherently limited to processing only text or images, making it challenging to extract the rich contextual information within these files. As a result, application users often resort to manually copying and pasting large amounts of information to converse with LLMs, significantly increasing unnecessary operational overhead. The file upload feature addresses this limitation by allowing files to be uploaded, parsed, referenced, and downloaded as File variables within workflow applications. 
**This empowers developers to easily construct complex workflows capable of understanding and processing various media types, including images, audio, and video.** @@ -34,66 +34,111 @@ Both file upload and knowledge base provide additional contextual information fo * File Upload: Typically for temporary use, not stored long-term in the system. * Knowledge Base: Exists as a long-term part of the application, can be continuously updated and maintained. -## Quick Start +## Quick Start: Building a Chatflow / Workflow Application with File Upload Feature -Dify supports file uploads in both [ChatFlow](/en-us/user-guide/build-app/flow-app/create-flow-app#chatflow) and [WorkFlow](/en-us/user-guide/build-app/flow-app/create-flow-app#workflow) type applications, processing them through variables for LLMs. Application developers can refer to the following methods to enable file upload functionality: +Dify supports file uploads in both [ChatFlow](key-concepts.md) and [WorkFlow](key-concepts.md#chatflow-and-workflow) type applications, processing them through variables for LLMs. Application developers can refer to the following methods to enable file upload functionality: * In Workflow applications: - * Add file variables in the ["Start Node"](/en-us/user-guide/build-app/flow-app/nodes/start) + * Add file variables in the ["Start Node"](node/start.md) * In ChatFlow applications: - * Enable file upload in ["Additional Features"](/en-us/user-guide/build-app/flow-app/additional-features) to allow direct file uploads in the chat window - * Add file variables in the "[Start Node"](/en-us/user-guide/build-app/flow-app/nodes/start) + * Enable file upload in ["Additional Features"](additional-features.md) to allow direct file uploads in the chat window + * Add file variables in the "[Start Node"](node/start.md) * Note: These two methods can be configured simultaneously and are independent of each other. 
The file upload settings in additional features (including upload method and quantity limit) do not affect the file variables in the start node. For example, if you only want to create file variables through the start node, you don't need to enable the file upload feature in additional features. These two methods provide flexible file upload options for applications to meet the needs of different scenarios. **File Types** -| File Type | Supported Formats | -|-----------|------------------| -| Documents | TXT, MARKDOWN, PDF, HTML, XLSX, XLS, DOCX, CSV, EML, MSG, PPTX, PPT, XML, EPUB | -| Images | JPG, JPEG, PNG, GIF, WEBP, SVG | -| Audio | MP3, M4A, WAV, WEBM, AMR | -| Video | MP4, MOV, MPEG, MPGA | -| Others | Custom file extension support | +`File` variables and `array[file]` variables support the following file types and formats: -**Method 1: Enable File Upload in Application Chat Box (Chatflow Only)** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
File TypeSupported Formats
DocumentsTXT, MARKDOWN, PDF, HTML, XLSX, XLS, DOCX, CSV, EML, MSG, PPTX, PPT, XML, EPUB.
ImagesJPG, JPEG, PNG, GIF, WEBP, SVG.
AudioMP3, M4A, WAV, WEBM, AMR.
VideoMP4, MOV, MPEG, MPGA.
OthersCustom file extension support
+ +#### Method 1: Using an LLM with File Processing Capabilities + +Some LLMs, such as [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support), now support direct processing and analysis of file content, enabling the use of file variables in the LLM node's prompts. + +> To prevent potential issues, application developers should verify the supported file types on the LLM's official website before utilizing the file variable. + +1. Click to create a Chatflow or Workflow application. +2. Add an LLM node and select an LLM with file processing capabilities. +3. Add a file variable in the start node. +4. Enter the file variable in the system prompt of the LLM node. +5. Complete the setup. + +![](https://assets-docs.dify.ai/2024/11/a7154e8966d979dcba13eac0a172ef89.png) + +**Method 2: Enable File Upload in Application Chat Box (Chatflow Only)** 1. Click the **"Features"** button in the upper right corner of the Chatflow application to add more functionality to the application. After enabling this feature, application users can upload and update files at any time during the application dialogue. A maximum of 10 files can be uploaded simultaneously, with a size limit of 15MB per file. - - File upload feature - +![file upload](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/20784cfa167417654a69c10e42e8271b.png) Enabling this feature does not grant LLMs the ability to directly read files. A **Document Extractor** is still needed to parse documents into text for LLM comprehension. * For audio files, models like `gpt-4o-audio-preview` that support multimodal input can process audio directly without additional extractors. -* For video and other file types, there are currently no corresponding extractors. Application developers need to [integrate external tools](/en-us/user-guide/tools/extensions/api-based/external-data-tool) for processing. +* For video and other file types, there are currently no corresponding extractors. 
Application developers need to [integrate external tools](../extension/api-based-extension/external-data-tool.md) for processing. 2. Add a Document Extractor node, and select the `sys.files` variable in the input variables. 3. Add an LLM node and select the output variable of the Document Extractor node in the system prompt. 4. Add an "Answer" node at the end, filling in the output variable of the LLM node. - - Document extractor workflow - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/6802747ef5de4772ce8d05f6c0a23130.png) Once enabled, users can upload files and engage in conversations in the dialogue box. However, with this method, the LLM application does not have the ability to remember file contents, and files need to be uploaded for each conversation. - - Chat interface - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/b18af11da3f339c496193d9732906849.png) -If you want the LLM to remember file contents during conversations, please refer to Method 2. +If you want the LLM to remember file contents during conversations, please refer to Method 3. -**Method 2: Enable File Upload by Adding File Variables** +**Method 3: Enable File Upload by Adding File Variables** **1. Add File Variables in the "Start" Node** Add input fields in the application's "Start" node, choosing either **"Single File"** or **"File List"** as the field type for the variable. -* **Single File**: Allows the application user to upload only one file. -* **File List**: Allows the application user to batch upload multiple files at once. + + +* **Single File** + + Allows the application user to upload only one file. +* **File List** + + Allows the application user to batch upload multiple files at once. > For ease of operation, we will use a single file variable as an example. @@ -103,49 +148,47 @@ There are two main ways to use file variables: 1. 
Using tool nodes to convert file content: * For document-type files, you can use the "Document Extractor" node to convert file content into text form. - * This method is suitable for cases where file content needs to be parsed into a format that the model can understand (such as string, array[string], etc.). + * This method is suitable for cases where file content needs to be parsed into a format that the model can understand (such as string, array\[string], etc.). 2. Using file variables directly in LLM nodes: * For certain types of files (such as images), you can use file variables directly in LLM nodes. * For example, for file variables of image type, you can enable the vision feature in the LLM node and then directly reference the corresponding file variable in the variable selector. +The choice between these methods depends on the file type and your specific requirements. Next, we will detail the specific steps for both methods. + **2. Add Document Extractor Node** After uploading, files are stored in single file variables, which LLMs cannot directly read. Therefore, a **"Document Extractor"** node needs to be added first to extract content from uploaded document files and send it to the LLM node for information processing. - - Document Extractor configuration - +Use the file variable from the "Start" node as the input variable for the **"Document Extractor"** node. + +![Document Extractor](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/8e6a3deaaa5eebeb66f9e1d844dc1ec6.png) Fill in the output variable of the "Document Extractor" node in the system prompt of the LLM node. - - LLM node configuration - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/f6ea094b30b240c999a4248d1fc21a1c.png) + +After completing these settings, application users can paste file URLs or upload local files in the WebApp, then interact with the LLM about the document content. 
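Conceptually, the extraction step turns an uploaded file into a plain string for the LLM prompt. The sketch below is only an illustration; the function, suffix list, and file names are made up, and the real Document Extractor handles far more formats (PDF, DOCX, XLSX, and so on):

```python
# Toy sketch of what a document-extraction step does: decode a file into a
# string output variable. Only plain-text types are handled here.
from pathlib import Path

TEXT_SUFFIXES = {".txt", ".md", ".markdown", ".csv", ".html", ".xml"}

def extract_text(filename: str, raw: bytes) -> str:
    suffix = Path(filename).suffix.lower()
    if suffix in TEXT_SUFFIXES:
        return raw.decode("utf-8", errors="replace")
    # Unsupported types would need an external tool in a real workflow.
    raise ValueError(f"no text extractor for {suffix!r}")

print(extract_text("contract.txt", b"Term: 12 months"))  # -> Term: 12 months
```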
Users can replace files at any time during the conversation, and the LLM will obtain the latest file content. **Referencing File Variables in LLM Nodes** For certain file types (such as images), file variables can be directly used within LLM nodes. This method is particularly suitable for scenarios requiring visual analysis. Here are the specific steps: -1. In the LLM node, enable the vision functionality. -2. In the variable selector of the LLM node, directly reference the previously created file variable. -3. In the system prompt, guide the model on how to process the image input. +1. In the LLM node, enable the vision functionality. This allows the model to process image inputs (the model must support vision capabilities). +2. In the variable selector of the LLM node, directly reference the previously created file variable. If file upload was enabled through additional features, select the `sys.files` variable. +3. In the system prompt, guide the model on how to process the image input. For example, you can instruct the model to describe the image content or answer questions about the image. - - LLM node with vision - +Below is an example configuration: + +![Using file variables directly in LLM node](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/f8980732757050049a430eea934511cf.png) + +It's important to note that when directly using file variables in LLM node, the developers need to ensure that the file variable contains only image files; otherwise, errors may occur. If users might upload different types of files, we need to use list operator node for filtering files. **File Download** -Placing file variables in answer nodes or end nodes will provide a file download card in the conversation box when the application reaches that node. +Placing file variables in answer nodes or end nodes will provide a file download card in the conversation box when the application reaches that node. Clicking the card allows for file download. 
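The file-type filtering mentioned above — keeping only image files before a vision-enabled LLM — amounts to something like the following sketch. The extension list is taken from the supported-formats table; the function name is illustrative:

```python
# Sketch of the filtering a List Operator node performs: keep only image
# files from a mixed file list before passing them to a vision-enabled LLM.
IMAGE_EXTENSIONS = {"jpg", "jpeg", "png", "gif", "webp", "svg"}

def only_images(files: list[str]) -> list[str]:
    return [f for f in files if f.rsplit(".", 1)[-1].lower() in IMAGE_EXTENSIONS]

print(only_images(["scan.PNG", "report.pdf", "photo.jpg"]))
# -> ['scan.PNG', 'photo.jpg']
```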
- - File download interface - +![file download](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/bc39be7fd4879b3875e6b77daaff5d37.png) ## Advanced Usage If you want the application to support uploading multiple types of files, such as allowing users to upload document files, images, and audio/video files simultaneously, you need to add a "File List" variable in the "Start Node" and use the "List Operation" node to process different file types. For detailed instructions, please refer to the List Operation node. - - - Diagram of workflow for processing multiple file types - diff --git a/en/user-guide/build-app/flow-app/concepts.mdx b/en/guides/workflow/key-concepts.mdx similarity index 53% rename from en/user-guide/build-app/flow-app/concepts.mdx rename to en/guides/workflow/key-concepts.mdx index 432298b2..89aa20b4 100644 --- a/en/user-guide/build-app/flow-app/concepts.mdx +++ b/en/guides/workflow/key-concepts.mdx @@ -1,19 +1,19 @@ --- -title: Concepts -version: 'English' +title: Key Concepts --- + ### Nodes **Nodes are the key components of a workflow**. By connecting nodes with different functionalities, you can execute a series of operations within the workflow. -For core workflow nodes, please refer to [Node - Start](/en-us/user-guide/build-app/flow-app/nodes/start). +For core workflow nodes, please refer to [Block Description](node/). *** ### Variables -**Variables are used to link the input and output of nodes within a workflow**, enabling complex processing logic throughout the process. Fore more details, please take refer to [Variables](/en-us/user-guide/build-app/flow-app/variables). +**Variables are used to link the input and output of nodes within a workflow**, enabling complex processing logic throughout the process. Fore more details, please take refer to [Variables](variables.md). 
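As a mental model, workflow variables behave like a small pool keyed by node name: each node writes its outputs under its own name, and downstream nodes read them through a "node.variable" selector. The sketch below is a toy illustration with made-up node names, not Dify's implementation:

```python
# Toy model of a workflow variable pool linking node outputs to node inputs.
pool: dict[str, dict] = {}

def write_outputs(node: str, outputs: dict) -> None:
    pool[node] = outputs

def read_variable(selector: str):
    node, var = selector.split(".", 1)
    return pool[node][var]

write_outputs("start", {"query": "Summarize this file"})
write_outputs("doc_extractor", {"text": "...extracted text..."})
print(read_variable("doc_extractor.text"))  # -> ...extracted text...
```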
*** @@ -26,13 +26,22 @@ For core workflow nodes, please refer to [Node - Start](/en-us/user-guide/build- **Usage Entry Points** -![Chatflow Entry](/en-us/images/assets/chatflow.png) - -![Workflow Entry](/images/assets/workflow.png) +
+ + +
**Differences in Available Nodes** -1. The End node is an ending node for Workflow and can only be selected at the end of the process. -2. The Answer node is specific to Chatflow, used for streaming text output, and can output at intermediate steps in the process. +1. The [End node](node/end.md) is an ending node for Workflow and can only be selected at the end of the process. +2. The [Answer node](node/answer.md) is specific to Chatflow, used for streaming text output, and can output at intermediate steps in the process. 3. Chatflow has built-in chat memory (Memory) for storing and passing multi-turn conversation history, which can be enabled in nodes like LLM and question classifiers. Workflow does not have Memory-related configurations and cannot enable them. -4. Built-in variables for Chatflow's start node include: `sys.query`, `sys.files`, `sys.conversation_id`, `sys.user_id`. Built-in variables for Workflow's start node include: `sys.files`, `sys_id`. +4. Built-in variables for Chatflow's [start node](node/start.md) include: `sys.query`, `sys.files`, `sys.conversation_id`, `sys.user_id`. Built-in [variables](variables.md) for Workflow's start node include: `sys.files`, `sys_id`. diff --git a/en/user-guide/build-app/flow-app/nodes/README.md b/en/guides/workflow/nodes/README.mdx similarity index 82% rename from en/user-guide/build-app/flow-app/nodes/README.md rename to en/guides/workflow/nodes/README.mdx index 393d4d1e..dd09b4c0 100644 --- a/en/user-guide/build-app/flow-app/nodes/README.md +++ b/en/guides/workflow/nodes/README.mdx @@ -1,7 +1,10 @@ -# Node Description +--- +title: Node Description +--- + **Nodes are the key components of a workflow**, enabling the execution of a series of operations by connecting nodes with different functionalities. ### Core Nodes -
StartDefines the initial parameters for starting a workflow process.
EndDefines the final output content for ending a workflow process.
AnswerDefines the response content in a Chatflow process.
Large Language Model (LLM)Calls a large language model to answer questions or process natural language.
Knowledge RetrievalRetrieves text content related to user questions from a knowledge base, which can serve as context for downstream LLM nodes.
Question ClassifierBy defining classification descriptions, the LLM can select the matching classification based on user input.
IF/ELSEAllows you to split the workflow into two branches based on if/else conditions.
Code ExecutionRuns Python/NodeJS code to execute custom logic such as data transformation within the workflow.
TemplateEnables flexible data transformation and text processing using Jinja2, a Python templating language.
Variable AggregatorAggregates variables from multiple branches into one variable for unified configuration of downstream nodes.
Variable AssignerThe variable assigner node is used to assign values to writable variables.
Parameter ExtractorUses LLM to infer and extract structured parameters from natural language for subsequent tool calls or HTTP requests.
IterationExecutes multiple steps on list objects until all results are output.
HTTP RequestAllows sending server requests via the HTTP protocol, suitable for retrieving external results, webhooks, generating images, and other scenarios.
ToolsEnables calling built-in Dify tools, custom tools, sub-workflows, and more within the workflow.
+
StartDefines the initial parameters for starting a workflow process.
EndDefines the final output content for ending a workflow process.
AnswerDefines the response content in a Chatflow process.
Large Language Model (LLM)Calls a large language model to answer questions or process natural language.
Knowledge RetrievalRetrieves text content related to user questions from a knowledge base, which can serve as context for downstream LLM nodes.
Question ClassifierBy defining classification descriptions, the LLM can select the matching classification based on user input.
IF/ELSEAllows you to split the workflow into two branches based on if/else conditions.
Code ExecutionRuns Python/NodeJS code to execute custom logic such as data transformation within the workflow.
TemplateEnables flexible data transformation and text processing using Jinja2, a Python templating language.
Variable AggregatorAggregates variables from multiple branches into one variable for unified configuration of downstream nodes.
Variable AssignerThe variable assigner node is used to assign values to writable variables.
Parameter ExtractorUses LLM to infer and extract structured parameters from natural language for subsequent tool calls or HTTP requests.
IterationExecutes multiple steps on list objects until all results are output.
HTTP RequestAllows sending server requests via the HTTP protocol, suitable for retrieving external results, webhooks, generating images, and other scenarios.
ToolsEnables calling built-in Dify tools, custom tools, sub-workflows, and more within the workflow.
LoopA Loop node executes repetitive tasks that depend on previous iteration results until exit conditions are met or the maximum loop count is reached.
diff --git a/en/guides/workflow/nodes/agent.mdx b/en/guides/workflow/nodes/agent.mdx new file mode 100644 index 00000000..c94c9bf4 --- /dev/null +++ b/en/guides/workflow/nodes/agent.mdx @@ -0,0 +1,78 @@ +--- +title: Agent +--- + + +## Definition + +An Agent Node is a component in Dify Chatflow/Workflow that enables autonomous tool invocation. By integrating different Agent reasoning strategies, LLMs can dynamically select and execute tools at runtime, thereby performing multi-step reasoning. + +## Configuration Steps + +### Add the Node + +In the Dify Chatflow/Workflow editor, drag the Agent node from the components panel onto the canvas. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/1f4d803ff68394d507abd3bcc13ba0f3.png) + +### Select an Agent Strategy + +In the node configuration panel, click Agent Strategy. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/f14082c44462ac03955e41d66ffd4cca.png) + +From the dropdown menu, select the desired Agent reasoning strategy. Dify provides two built-in strategies, **Function Calling and ReAct**, which can be installed from the **Marketplace → Agent Strategies category**. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/47e29e5993821b61632af9cdb8392357.png) + +#### 1. Function Calling + +Function Calling maps user commands to predefined functions or tools. The LLM first identifies user intent, then decides which function to call and extracts the required parameters. Its core mechanism involves explicitly calling external functions or tools. + +Pros: + +**• Precision:** For well-defined tasks, it can call the corresponding tool directly without requiring complex reasoning. + +**• Easier external feature integration:** Various external APIs or tools can be wrapped into functions for the model to call. 
+ +**• Structured output:** The model outputs structured information about function calls, facilitating processing by downstream nodes. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/10505cd7c6f0b3ba10161abb88d9e36b.png) + +#### 2. ReAct (Reason + Act) + +ReAct enables the Agent to alternate between reasoning and taking action: the LLM first thinks about the current state and goal, then selects and calls the appropriate tool. The tool’s output in turn informs the LLM’s next step of reasoning and action. This cycle continues until the problem is resolved. + +Pros: + +**• Effective external information use:** It can leverage external tools to retrieve information and handle tasks that the model alone cannot accomplish. + +**• Improved explainability:** Because reasoning and actions are interwoven, there is a certain level of traceability in the Agent’s thought process. + +**• Wide applicability:** Suitable for scenarios that require external knowledge or need to perform specific actions, such as Q\&A, information retrieval, and task execution. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/60fa430029e509ac1a609c72fd04c413.png) + +Developers can contribute Agent strategy plugins to the public [repository](https://github.com/langgenius/dify-plugins). After review, these plugins will be listed in the Marketplace for others to install. + +### Configure Node Parameters + +After choosing the Agent strategy, the configuration panel will display the relevant options. For the Function Calling and ReAct strategies that ship with Dify, the available configuration items include: + +1. **Model:** Select the large language model that drives the Agent. +2. **Tools List:** The approach to using tools is defined by the Agent strategy. Click + to add and configure tools the Agent can call. + * Search: Select an installed tool plugin from the dropdown. 
+ * Authorization: Provide API keys and other credentials to enable the tool. + * Tool Description and Parameter Settings: Provide a description to help the LLM understand when and why to use the tool, and configure any functional parameters. +3. **Instruction**: Define the Agent’s task goals and context. Jinja syntax is supported to reference upstream node variables. +4. **Query**: Receives user input. +5. **Maximum Iterations:** Set the maximum number of execution steps for the Agent. +6. **Output Variables:** Indicates the data structure output by the node. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/54c8e4f0eaa7379bd8c1b5ac6305b326.png) + +## Logs + +During execution, the Agent node generates detailed logs. You can see overall node execution information—including inputs and outputs, token usage, time spent, and status. Click Details to view the output from each round of Agent strategy execution. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/0120d3d0e63f5b59ec9d279a38c970ef.png) diff --git a/en/user-guide/build-app/flow-app/nodes/answer.mdx b/en/guides/workflow/nodes/answer.mdx similarity index 61% rename from en/user-guide/build-app/flow-app/nodes/answer.mdx rename to en/guides/workflow/nodes/answer.mdx index c11616ba..4d6e0246 100644 --- a/en/user-guide/build-app/flow-app/nodes/answer.mdx +++ b/en/guides/workflow/nodes/answer.mdx @@ -1,9 +1,7 @@ --- -title: 'Answer' -version: 'English' +title: Answer --- -# Direct Reply Defining Reply Content in a Chatflow Process. In a text editor, you have the flexibility to determine the reply format. This includes crafting a fixed block of text, utilizing output variables from preceding steps as the reply content, or merging custom text with variables for the response. @@ -15,10 +13,19 @@ Answer node can be seamlessly integrated at any point to dynamically deliver con Example 1: Output plain text. -![](/images/assets/answer-plain-text.png) +
+ + +
diff --git a/en/user-guide/build-app/flow-app/nodes/code.mdx b/en/guides/workflow/nodes/code.mdx similarity index 74% rename from en/user-guide/build-app/flow-app/nodes/code.mdx rename to en/guides/workflow/nodes/code.mdx index d6df658d..4ba3e28f 100644 --- a/en/user-guide/build-app/flow-app/nodes/code.mdx +++ b/en/guides/workflow/nodes/code.mdx @@ -2,12 +2,13 @@ title: Code Execution --- + ## Table of Contents -* [Introduction](#introduction) -* [Usage Scenarios](#usage-scenarios) -* [Local Deployment](#local-deployment) -* [Security Policies](#security-policies) +* [Introduction](code.md#introduction) +* [Usage Scenarios](code.md#usage-scenarios) +* [Local Deployment](code.md#local-deployment) +* [Security Policies](code.md#security-policies) ## Introduction @@ -15,7 +16,7 @@ The code node supports running Python/NodeJS code to perform data transformation This node significantly enhances the flexibility for developers, allowing them to embed custom Python or JavaScript scripts within the workflow and manipulate variables in ways that preset nodes cannot achieve. Through configuration options, you can specify the required input and output variables and write the corresponding execution code: -![](/en-us/images/assets/image-(157).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/9969aa1bc1912aebe366f5d8f5dde296.png) ## Configuration @@ -75,6 +76,28 @@ docker-compose -f docker-compose.middleware.yaml up -d Both Python and JavaScript execution environments are strictly isolated (sandboxed) to ensure security. This means that developers cannot use functions that consume large amounts of system resources or may pose security risks, such as direct file system access, making network requests, or executing operating system-level commands. These limitations ensure the safe execution of the code while avoiding excessive consumption of system resources. 
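Within these sandbox limits, a Python code-node body is typically written as a `main` function: its parameters are the node's configured input variables, and the returned dict keys must match the declared output variables. A minimal sketch, with illustrative variable names:

```python
# Minimal sketch of a Python code-node body: inputs arrive as arguments, and
# the returned dict keys must line up with the node's declared output
# variables (names here are illustrative).
def main(arg1: str, arg2: str) -> dict:
    return {
        "result": f"{arg1} {arg2}".strip(),
    }

print(main("hello", "world"))  # -> {'result': 'hello world'}
```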
+### Advanced Features + +**Retry on Failure** + +For some exceptions that occur in the node, it is usually sufficient to retry the node. When the error retry function is enabled, the node will automatically retry according to the preset strategy when an error occurs. You can adjust the maximum number of retries and the interval between each retry to set the retry strategy. + +- The maximum number of retries is 10 +- The maximum retry interval is 5000 ms + +![](https://assets-docs.dify.ai/2024/12/9fdd5525a91dc925b79b89272893becf.png) + +**Error Handling** + +When processing information, code nodes may encounter code execution exceptions. Developers can follow these steps to configure fail branches, enabling contingency plans when nodes encounter exceptions, thus avoiding workflow interruptions. + +1. Enable "Error Handling" in the code node +2. Select and configure an error handling strategy + +![Code Error handling](https://assets-docs.dify.ai/2024/12/58f392734ce44b22cd8c160faf28cd14.png) + +For more information about exception handling approaches, please refer to [Error Handling](https://docs.dify.ai/zh-hans/guides/workflow/error-handling). + ### FAQ **Why can't I save the code in the code node?** @@ -95,4 +118,5 @@ This code snippet has the following issues: Dangerous code will be automatically blocked by Cloudflare WAF. You can check if it's been blocked by looking at the "Network" tab in your browser's "Web Developer Tools". 
+![Cloudflare WAF](https://assets-docs.dify.ai/2024/12/ad4dc065c4c567c150ab7fa7bfd123a3.png) diff --git a/en/user-guide/build-app/flow-app/nodes/doc-extractor.mdx b/en/guides/workflow/nodes/doc-extractor.mdx similarity index 80% rename from en/user-guide/build-app/flow-app/nodes/doc-extractor.mdx rename to en/guides/workflow/nodes/doc-extractor.mdx index d8b8ba9e..4452c879 100644 --- a/en/user-guide/build-app/flow-app/nodes/doc-extractor.mdx +++ b/en/guides/workflow/nodes/doc-extractor.mdx @@ -2,6 +2,7 @@ title: Doc Extractor --- + #### Definition LLMs cannot directly read or interpret document contents. Therefore, it's necessary to parse and read information from user-uploaded documents through a document extractor node, convert it to text, and then pass the content to the LLM to process the file contents. @@ -15,7 +16,7 @@ LLMs cannot directly read or interpret document contents. Therefore, it's necess The document extractor node can be understood as an information processing center. It recognizes and reads files in the input variables, extracts information, and converts it into string-type output variables for downstream nodes to call. -![](/en-us/images/assets/image-(11).png) +![doc extractor](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/f3853b40904e275da895711107e9c72f.png) The document extractor node structure is divided into input variables and output variables. @@ -43,7 +44,7 @@ In a typical file interaction Q\&A scenario, the document extractor can serve as This section will introduce the usage of the document extractor node through a typical ChatPDF example workflow template. -![](/en-us/images/assets/image-(12).png) +![Chatpdf workflow](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/f6ea094b30b240c999a4248d1fc21a1c.png) **Configuration Process:** @@ -51,12 +52,14 @@ This section will introduce the usage of the document extractor node through a t 2. 
Add a document extractor node and select the `pdf` variable in the input variables. 3. Add an LLM node and select the output variable of the document extractor node in the system prompt. The LLM can read the contents of the file through this output variable. -![](/en-us/images/assets/image-(13).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/83bca46bcde07069660ff649e5c7cf4c.png) Configure the end node by selecting the output variable of the LLM node in the end node. -![](/en-us/images/assets/image-(14).png) +![chat with pdf](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d05301438e8aab7393bb5863554f1009.png) After configuration, the application will have file upload functionality, allowing users to upload PDF files and engage in conversation. -To learn how to upload files in chat conversations and interact with the LLM, please refer to Additional Features. + +To learn how to upload files in chat conversations and interact with the LLM, please refer to Additional Features. + diff --git a/en/guides/workflow/nodes/end.mdx b/en/guides/workflow/nodes/end.mdx new file mode 100644 index 00000000..f1602526 --- /dev/null +++ b/en/guides/workflow/nodes/end.mdx @@ -0,0 +1,32 @@ +--- +title: End +--- + + +### 1 Definition + +Define the final output content of a workflow. Every workflow needs at least one end node after complete execution to output the final result. + +The end node is a termination point in the process; no further nodes can be added after it. In a workflow application, results are only output when the end node is reached. If there are conditional branches in the process, multiple end nodes need to be defined. + +The end node must declare one or more output variables, which can reference any upstream node's output variables. + + +End nodes are not supported within Chatflow. 
+ + +*** + +### 2 Scenarios + +In the following [long story generation workflow](iteration.md#example-2-long-article-iterative-generation-another-scheduling-method), the variable `Output` declared by the end node is the output of the upstream code node. This means the workflow will end after the Code node completes execution and will output the execution result of Code. + +![End Node - Long Story Generation Example](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/a103792790447c1725c1da1176334cae.png) + +**Single Path Execution Example:** + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/9e43961344d318e09af8d64464d81774.png) + +**Multi-Path Execution Example:** + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/3cb3f5fea376265bede0a4ac5bcc1ddc.png) diff --git a/en/user-guide/build-app/flow-app/nodes/http-request.mdx b/en/guides/workflow/nodes/http-request.mdx similarity index 60% rename from en/user-guide/build-app/flow-app/nodes/http-request.mdx rename to en/guides/workflow/nodes/http-request.mdx index c75722aa..9f78d99c 100644 --- a/en/user-guide/build-app/flow-app/nodes/http-request.mdx +++ b/en/guides/workflow/nodes/http-request.mdx @@ -2,6 +2,7 @@ title: HTTP Request --- + ### Definition Allows sending server requests via the HTTP protocol, suitable for scenarios such as retrieving external data, webhooks, generating images, and downloading files. It enables you to send customized HTTP requests to specified web addresses, achieving interconnectivity with various external services. @@ -17,7 +18,7 @@ This node supports common HTTP request methods: You can configure various aspects of the HTTP request, including URL, request headers, query parameters, request body content, and authentication information. 
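The pieces the node assembles map directly onto an ordinary HTTP request. As a sketch (standard library only; the URL, header values, and field names below are illustrative placeholders, not a real endpoint or a Dify schema):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# Illustrative placeholders for values Dify would inject from upstream
# node variables; the endpoint and field names are invented.
customer_id = "C-1024"
feedback = "The export button is unresponsive."

base_url = "https://example.com/api/feedback"           # request URL
query = urlencode({"source": "dify-workflow"})          # query parameters
headers = {
    "Authorization": "Bearer <api-key>",                # authentication header
    "Content-Type": "application/json",
}
body = json.dumps(
    {"customer_id": customer_id, "feedback": feedback}  # request body with variables
).encode("utf-8")

req = Request(f"{base_url}?{query}", data=body, headers=headers, method="POST")

print(req.method)         # POST
print(req.full_url)       # https://example.com/api/feedback?source=dify-workflow
print(req.data.decode())  # the JSON body; urllib.request.urlopen(req) would send it
```

Embedding upstream variables is then just a matter of interpolating them into the URL, headers, or body, which is what the node's variable insertion does for you.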
-![](/images/assets/workflow-http-request-node.png) +![HTTP Request Configuration](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/07c5e952eb4c9d6a32d0b7c2d855d4a5.png) *** @@ -27,7 +28,7 @@ You can configure various aspects of the HTTP request, including URL, request he One practical feature of this node is the ability to dynamically insert variables into different parts of the request based on the scenario. For example, when handling customer feedback requests, you can embed variables such as username or customer ID, feedback content, etc., into the request to customize automated reply messages or fetch specific customer information and send related resources to a designated server. -![](/images/assets/customer-feedback-classification.png) +![Customer Feedback Classification](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/090975269f8998f906c5636dde8d9540.png) The return values of an HTTP request include the response body, status code, response headers, and files. Notably, if the response contains a file, this node can automatically save the file for use in subsequent steps of the workflow. This design not only improves processing efficiency but also makes handling responses with files straightforward and direct. @@ -39,5 +40,26 @@ Example: Suppose you are developing a document management application and need t Here is a configuration example: -![](/images/assets/image-(145).png) +![http-node-send-file](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/1f2e33cf7bed33096b5aee145006193d.png) +### Advanced Features + +**Retry on Failure** + +For some exceptions that occur in the node, it is usually sufficient to retry the node again. When the error retry function is enabled, the node will automatically retry according to the preset strategy when an error occurs. You can adjust the maximum number of retries and the interval between each retry to set the retry strategy. 
+ +- The maximum number of retries is 10 +- The maximum retry interval is 5000 ms + +![](https://assets-docs.dify.ai/2024/12/2e7c6080c0875e31a074c2a9a4543797.png) + +**Error Handling** + +When processing information, HTTP nodes may encounter exceptional situations such as network request timeouts or request limits. Application developers can follow these steps to configure fail branches, enabling contingency plans when nodes encounter exceptions and avoiding workflow interruptions. + +1. Enable "Error Handling" in the HTTP node +2. Select and configure an error handling strategy + +For more information about exception handling approaches, please refer to [Error Handling](https://docs.dify.ai/zh-hans/guides/workflow/error-handling). + +![](https://assets-docs.dify.ai/2024/12/91daa86d9770390ab2a41d6d0b6ed1e7.png) \ No newline at end of file diff --git a/en/user-guide/build-app/flow-app/nodes/ifelse.mdx b/en/guides/workflow/nodes/ifelse.mdx similarity index 88% rename from en/user-guide/build-app/flow-app/nodes/ifelse.mdx rename to en/guides/workflow/nodes/ifelse.mdx index 24259e4c..4c8783e3 100644 --- a/en/user-guide/build-app/flow-app/nodes/ifelse.mdx +++ b/en/guides/workflow/nodes/ifelse.mdx @@ -2,6 +2,7 @@ title: Conditional Branch IF/ELSE --- + ### Definition Allows you to split the workflow into two branches based on if/else conditions. @@ -29,7 +30,7 @@ A conditional branching node has three parts: ### Scenario -![](/images/assets/if-else-elif.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d26ffff1b2ad0989d46e80d6812cf2e7.png) Taking the above **Text Summary Workflow** as an example: @@ -43,4 +44,4 @@ Taking the above **Text Summary Workflow** as an example: For complex condition judgments, you can set multiple condition judgments and configure **AND** or **OR** between conditions to take the **intersection** or **union** of the conditions, respectively. 
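The AND/OR combination can be sketched in a few lines of Python; the memo fields and thresholds here are invented purely for illustration:

```python
# A minimal sketch of combining multiple condition judgments.
def evaluate(conditions, mode="AND"):
    """AND takes the intersection of results; OR takes the union."""
    results = [condition() for condition in conditions]
    return all(results) if mode == "AND" else any(results)

memo = {"type": "travel expense", "amount": 120.0}

conditions = [
    lambda: memo["type"] == "travel expense",  # judgment 1
    lambda: memo["amount"] > 100,              # judgment 2
]

print(evaluate(conditions, mode="AND"))  # True: both judgments hold
memo["amount"] = 80.0
print(evaluate(conditions, mode="AND"))  # False: judgment 2 now fails
print(evaluate(conditions, mode="OR"))   # True: judgment 1 still holds
```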
-![](/en-us/images/assets/mutliple-judgement-(1).png) +![Multiple Condition Judgments](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/0b71ee7363e07298348e0c81e63481b0.png) diff --git a/en/guides/workflow/nodes/iteration.mdx b/en/guides/workflow/nodes/iteration.mdx new file mode 100644 index 00000000..23419a6b --- /dev/null +++ b/en/guides/workflow/nodes/iteration.mdx @@ -0,0 +1,189 @@ +--- +title: Iteration +--- + + +### Definition + +Sequentially performs the same operations on array elements until all results are outputted, functioning as a task batch processor. Iteration nodes typically work in conjunction with array variables. + +For example, when processing long text translations, inputting all content directly into an LLM node may reach the single conversation limit. To address the issue, upstream nodes first split the long text into multiple chunks, then use iteration nodes to perform batch translations, thus avoiding the message limit of a single LLM conversation. + +*** + +### Functional Description + +Using iteration nodes requires input values to be formatted as list objects. The node sequentially processes all elements in the array variable from the iteration start node, applying identical processing steps to each element. Each processing cycle is called an iteration, culminating in the final output. + +An iteration node consists of three core components: **Input Variables**, **Iteration Workflow**, and **Output Variables**. + +**Input Variables:** Accepts only Array type data. + +**Iteration Workflow:** Supports multiple workflow nodes to orchestrate task sequences within the iteration node. + +**Output Variables:** Outputs only array variables (`Array[List]`). + +
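The node's contract can be sketched in plain Python: an Array input, the same steps applied to each element in order, and an Array output. The translate step below is a hypothetical stand-in for whatever nodes are placed inside the iteration:

```python
# Conceptual sketch of the iteration node; translate_chunk stands in
# for the inner sub-workflow (e.g., an LLM translation node).
def translate_chunk(chunk: str) -> str:
    return f"[translated] {chunk}"

def run_iteration(chunks: list) -> list:
    # Apply identical processing to every element, collect results in order.
    return [translate_chunk(chunk) for chunk in chunks]

long_text_chunks = ["Chapter 1 ...", "Chapter 2 ...", "Chapter 3 ..."]
translated = run_iteration(long_text_chunks)

print(len(translated))  # 3: one output element per input element
print(translated[0])    # [translated] Chapter 1 ...
```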
+ + +
+ +* [Parameter Extraction](parameter-extractor.md) + +![Parameter Extraction](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/b5a9d4bee95d7a1331bb7ff7433e47a3.png) + +* [Knowledge Base Retrieval](knowledge-retrieval.md) +* [Iteration](iteration.md) +* [Tools](tools.md) +* [HTTP Request](http-request.md) + +*** + +#### How to Convert an Array to Text + +The output variable of the iteration node is in array format and cannot be directly output. You can use a simple step to convert the array back to text. + +**Convert Using a Code Node** + +![Code Node Conversion](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/8be2372b00a802e981efe6f0ceff815b.png) + +CODE Example: + +```python +def main(articleSections: list): + data = articleSections + return { + "result": "\n".join(data) + } +``` + +**Convert Using a Template Node** + +![Template Node Conversion](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/8c0bcc5de453dea2776d2755449bd971.png) + +CODE Example: + +```django +{{ articleSections | join("\n") }} +``` diff --git a/en/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx b/en/guides/workflow/nodes/knowledge-retrieval.mdx similarity index 87% rename from en/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx rename to en/guides/workflow/nodes/knowledge-retrieval.mdx index d238ab26..f38a2fd1 100644 --- a/en/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx +++ b/en/guides/workflow/nodes/knowledge-retrieval.mdx @@ -2,12 +2,10 @@ title: Knowledge Retrieval --- + The Knowledge Base Retrieval Node is designed to query text content related to user questions from the Dify Knowledge Base, which can then be used as context for subsequent answers by the Large Language Model (LLM).
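As a rough illustration of the node's contract (a user question in, the most relevant text chunks out), here is a toy retriever. Dify's actual node performs vector or full-text retrieval over the knowledge base, so the keyword-overlap scoring below is only a deliberately simplified stand-in:

```python
# Toy keyword-overlap retriever illustrating query-in, context-out;
# the knowledge-base chunks are invented example data.
def retrieve(query: str, knowledge_base: list, top_k: int = 2) -> list:
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(chunk.lower().split())), chunk)
        for chunk in knowledge_base
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for score, chunk in scored[:top_k] if score > 0]

knowledge_base = [
    "Refunds are processed within 14 days of the request.",
    "The warranty covers manufacturing defects for two years.",
    "Shipping is free for orders above 50 EUR.",
]

context = retrieve("how long do refunds take", knowledge_base)
print(context)  # only the refund chunk overlaps the question
```

The retrieved chunks then play the role of the node's output, which downstream nodes insert into the LLM prompt as context.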
- +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d90961c6d794d425a8e11df177315188.png) Configuring the Knowledge Base Retrieval Node involves four main steps: @@ -33,13 +31,11 @@ Use **Metadata Filtering** to refine document search in your knowledge base. For It's possible to modify the indexing strategy and retrieval mode for an individual knowledge base within the node. For a detailed explanation of these settings, refer to the knowledge base [help documentation](https://docs.dify.ai/guides/knowledge-base/retrieval-test-and-citation). Dify offers two recall strategies for different knowledge base retrieval scenarios: "N-to-1 Recall" and "Multi-way Recall". In the N-to-1 mode, knowledge base queries are executed through function calling, requiring the selection of a system reasoning model. In the multi-way recall mode, a Rerank model needs to be configured for result re-ranking. For a detailed explanation of these two recall strategies, refer to the retrieval mode explanation in the [help documentation](https://docs.dify.ai/guides/knowledge-base/create-knowledge-and-upload-documents#id-5-indexing-methods). - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/4a3007cda9dfa50ddac3711693725dce.png) diff --git a/en/user-guide/build-app/flow-app/nodes/list-operator.mdx b/en/guides/workflow/nodes/list-operator.mdx similarity index 90% rename from en/user-guide/build-app/flow-app/nodes/list-operator.mdx rename to en/guides/workflow/nodes/list-operator.mdx index 354a52ef..04e55bc6 100644 --- a/en/user-guide/build-app/flow-app/nodes/list-operator.mdx +++ b/en/guides/workflow/nodes/list-operator.mdx @@ -2,6 +2,7 @@ title: List Operator --- + File list variables support simultaneous uploading of multiple file types such as document files, images, audio, and video files. 
When application users upload files, all files are stored in the same `Array[File]` array-type variable, which **is not conducive to subsequent individual file processing.** > The `Array` data type means that the actual value of the variable could be \[1.mp3, 2.png, 3.doc]. LLMs only support reading single values such as image files or text content as input variables and cannot directly read array variables. @@ -12,11 +13,11 @@ The list operator can filter and extract attributes such as file format type, fi For example, in an application that allows users to upload both document files and image files simultaneously, different files need to be sorted through the **list operation node**, with different files being handled by different processes. -![](/images/assets/image-(123).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/522a0c932aab93d4f3970168412f759e.png) List operation nodes are generally used to extract information from array variables, converting them into variable types that can be accepted by downstream nodes through setting conditions. Its structure is divided into input variables, filter conditions, sorting, taking the first N items, and output variables. -![](/images/assets/image-(132).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/812d1b2f167065e17df8392b2cb3cc8a.png) **Input Variables** @@ -69,11 +70,11 @@ Array elements that meet all filter conditions. Filter conditions, sorting, and In file interaction Q\&A scenarios, application users may upload document files or image files simultaneously. LLMs only support the ability to recognize image files and do not support reading document files. At this time, the List Operation node is needed to preprocess the array of file variables and send different file types to corresponding processing nodes. The orchestration steps are as follows: -1. 
Enable the [Features](/en-us/user-guide/build-app/flow-app/additional-features) function and check both "Images" and "Document" types in the file types. +1. Enable the [Features](../additional-features.md) function and check both "Images" and "Document" types in the file types. 2. Add two list operation nodes, setting to extract image and document variables respectively in the "List Operator" conditions. 3. Extract document file variables and pass them to the "Doc Extractor" node; extract image file variables and pass them to the "LLM" node. 4. Add a "Answer" node at the end, filling in the output variable of the LLM node. -![](/images/assets/image-(133).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/610358293217e54b55b7e1d4d16bf83c.png) After the application user uploads both document files and images, document files are automatically diverted to the doc extractor node, and image files are automatically diverted to the LLM node to achieve joint processing of mixed files. diff --git a/en/user-guide/build-app/flow-app/nodes/llm.mdx b/en/guides/workflow/nodes/llm.mdx similarity index 64% rename from en/user-guide/build-app/flow-app/nodes/llm.mdx rename to en/guides/workflow/nodes/llm.mdx index 6003f5de..dd31a288 100644 --- a/en/user-guide/build-app/flow-app/nodes/llm.mdx +++ b/en/guides/workflow/nodes/llm.mdx @@ -1,13 +1,13 @@ --- title: LLM -version: 'English' --- + ### Definition Invokes the capabilities of large language models to process information input by users in the "Start" node (natural language, uploaded files, or images) and provide effective response information. 
-![LLM Node](/images/assets/llm-node-1.png) +![LLM Node](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/85730fbfa1d441d12d969b89adf2670e.png) *** @@ -22,6 +22,7 @@ LLM is the core node of Chatflow/Workflow, utilizing the conversational/generati * **Code Generation**: In programming assistance scenarios, generating specific business code or writing test cases based on user requirements. * **RAG**: In knowledge base Q\&A scenarios, reorganizing retrieved relevant knowledge to respond to user questions. * **Image Understanding**: Using multimodal models with vision capabilities to understand and answer questions about the information within images. +* **File Analysis**: In file processing scenarios, use LLMs to recognize and analyze the information contained within files. By selecting the appropriate model and writing prompts, you can build powerful and reliable solutions within Chatflow/Workflow. @@ -29,7 +30,7 @@ By selecting the appropriate model and writing prompts, you can build powerful a ### How to Configure -![LLM Node Configuration - Model Selection](/images/assets/llm-node-2.png) +![LLM Node Configuration - Model Selection](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/43f81418ea70d4d79e3705505e777b1b.png) **Configuration Steps:** @@ -38,9 +39,9 @@ By selecting the appropriate model and writing prompts, you can build powerful a 3. **Write Prompts**: The LLM node offers an easy-to-use prompt composition page. Selecting a chat model or completion model will display different prompt composition structures. 4. **Advanced Settings**: You can enable memory, set memory windows, and use the Jinja-2 template language for more complex prompts. - -If you are using Dify for the first time, you need to complete the [model configuration](/en-us/user-guide/models/model-configuration) in **System Settings-Model Providers** before selecting a model in the LLM node. 
- + +If you are using Dify for the first time, you need to complete the [model configuration](../../model-configuration/) in **System Settings-Model Providers** before selecting a model in the LLM node. + #### **Writing Prompts** @@ -50,11 +51,11 @@ In the LLM node, you can customize the model input prompts. If you select a chat If you're struggling to come up with effective system prompts (System), you can use the Prompt Generator to quickly create prompts suitable for your specific business scenarios, leveraging AI capabilities. -![](/images/assets/en-prompt-generator.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/bec10045f819316f80068c563cf14eb1.png) In the prompt editor, you can call out the **variable insertion menu** by typing `/` or `{` to insert **special variable blocks** or **upstream node variables** into the prompt as context content. -![Calling Out the Variable Insertion Menu](/images/assets/llm-node-3.png) +![Calling Out the Variable Insertion Menu](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d8ed0160a7fba0a14dd823ef97610cc4.png) *** @@ -64,33 +65,39 @@ In the prompt editor, you can call out the **variable insertion menu** by typing Context variables are a special type of variable defined within the LLM node, used to insert externally retrieved text content into the prompt. -![Context Variables](/images/assets/llm-node-4.png) +![Context Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/5aefed96962bd994f8f05bac96b11e22.png) In common knowledge base Q\&A applications, the downstream node of knowledge retrieval is typically the LLM node. The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. 
After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt. -This variable can be used not only as external knowledge introduced into the prompt context for LLM responses but also supports the application's **citation and attribution** feature due to its data structure containing segment reference information. +This variable can be used not only as external knowledge introduced into the prompt context for LLM responses but also supports the application's [**citation and attribution**](../../knowledge-base/retrieval-test-and-citation#id-2.-citation-and-attribution) feature due to its data structure containing segment reference information. - + If the context variable is associated with a common variable from an upstream node, such as a string type variable from the start node, the context variable can still be used as external knowledge, but the **citation and attribution** feature will be disabled. - + + +**File Variables** + +Some LLMs, such as [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support), now support direct processing of file content, enabling the use of file variables in prompts. To prevent potential issues, application developers should verify the supported file types on the LLM's official website before utilizing the file variable. + +![](https://assets-docs.dify.ai/2024/11/05b3d4a78038bc7afbb157078e3b2b26.png) + +> Refer to [File Upload](https://docs.dify.ai/guides/workflow/file-upload) for guidance on building a Chatflow/Workflow application with file upload functionality. **Conversation History** -To achieve conversational memory in text completion models (e.g., gpt-3.5-turbo-Instruct), Dify designed the conversation history variable in the original Prompt Expert Mode (discontinued). 
This variable is carried over to the LLM node in Chatflow, used to insert chat history between the AI and the user into the prompt, helping the LLM understand the context of the conversation. +To achieve conversational memory in text completion models (e.g., gpt-3.5-turbo-Instruct), Dify designed the conversation history variable in the original [Prompt Expert Mode (discontinued)](../../../learn-more/extended-reading/prompt-engineering/prompt-engineering-1/). This variable is carried over to the LLM node in Chatflow, used to insert chat history between the AI and the user into the prompt, helping the LLM understand the context of the conversation. - + The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow. - + - - Inserting Conversation History Variable - +![Inserting Conversation History Variable](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/b8642f8c6e3f562fceeefae83628fd68.png) **Model Parameters** The parameters of the model affect the output of the model. Different models have different parameters. The following figure shows the parameter list for `gpt-4`. -![](/en-us/images/assets/llm-img.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/5eaaa3f8082b769544a02ff510b207d8.png) The main parameter terms are explained as follows: @@ -104,7 +111,7 @@ The main parameter terms are explained as follows: If you do not understand what these parameters are, you can choose to load presets and select from the three presets: Creative, Balanced, and Precise. 
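For intuition, here is how such presets typically map onto a chat-completion request payload. The numbers below are illustrative only; they are an assumption for this sketch, not Dify's actual Creative/Balanced/Precise values:

```python
# Illustrative preset values; NOT Dify's real preset numbers.
PRESETS = {
    "creative": {"temperature": 1.0, "top_p": 0.95},
    "balanced": {"temperature": 0.7, "top_p": 0.9},
    "precise": {"temperature": 0.2, "top_p": 0.5},
}

def build_request(prompt: str, preset: str = "balanced", max_tokens: int = 512) -> dict:
    """Merge the chosen preset into a chat-completion style payload."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,   # upper bound on generated tokens
        **PRESETS[preset],
    }

payload = build_request("Summarize this article.", preset="precise")
print(payload["temperature"])  # 0.2: low temperature favors deterministic output
```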
-![](/en-us/images/assets/llm-img-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/b913f9cdf1f9b03e791a49836bc770dd.png) *** @@ -118,25 +125,34 @@ If you do not understand what these parameters are, you can choose to load prese **Jinja-2 Templates**: The LLM prompt editor supports Jinja-2 template language, allowing you to leverage this powerful Python template language for lightweight data transformation and logical processing. Refer to the [official documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). +**Retry on Failure**: For some exceptions that occur in the node, it is usually sufficient to retry the node again. When the error retry function is enabled, the node will automatically retry according to the preset strategy when an error occurs. You can adjust the maximum number of retries and the interval between each retry to set the retry strategy. + +- The maximum number of retries is 10 +- The maximum retry interval is 5000 ms + +![](https://assets-docs.dify.ai/2024/12/dfb43c1cbbf02cdd36f7d20973a5529b.png) + +**Error Handling**: Provides diverse node error handling strategies that can throw error messages when the current node fails without interrupting the main process, or continue completing tasks through backup paths. For detailed information, please refer to the [Error Handling](https://docs.dify.ai/guides/workflow/error-handling). + *** #### Use Cases * **Reading Knowledge Base Content** -To enable workflow applications to read "[Knowledge Base](/en-us/user-guide/knowledge-base/knowledge-base-creation/upload-documents)" content, such as building an intelligent customer service application, please follow these steps: +To enable workflow applications to read "[Knowledge Base](../../knowledge-base/)" content, such as building an intelligent customer service application, please follow these steps: 1. Add a knowledge base retrieval node upstream of the LLM node; 2. 
Fill in the **output variable** `result` of the knowledge retrieval node into the **context variable** of the LLM node; 3. Insert the **context variable** into the application prompt to give the LLM the ability to read text within the knowledge base. -![](/en-us/images/assets/image-(135).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/988590f51629f43ac81770396456b372.png) The `result` variable output by the Knowledge Retrieval Node also includes segmented reference information. You can view the source of information through the **Citation and Attribution** feature. - + Regular variables from upstream nodes can also be filled into context variables, such as string-type variables from the start node, but the **Citation and Attribution** feature will be ineffective. - + * **Reading Document Files** @@ -146,7 +162,17 @@ To enable workflow applications to read document contents, such as building a Ch * Add a document extractor node upstream of the LLM node, using the file variable as an input variable; * Fill in the **output variable** `text` of the document extractor node into the prompt of the LLM node. -For more information, please refer to [File Upload](/en-us/user-guide/build-app/flow-app/file-upload). +For more information, please refer to [File Upload](../file-upload.md). -![input system prompts](/en-us/images/assets/image-(137).png) +![input system prompts](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/373ac80deaf7ef9ed77019a94d31bed5.png) +* **Error Handling** + +When processing information, LLM nodes may encounter errors such as input text exceeding token limits or missing key parameters. Developers can follow these steps to configure exception branches, enabling contingency plans when node errors occur to avoid interrupting the entire flow: + +1. Enable "Error Handling" in the LLM node +2. 
Select and configure an error handling strategy + +![input system prompts](https://assets-docs.dify.ai/2024/12/f7109ce5e87c0e0a81248bb2672c7667.png) + +For more information about exception handling methods, please refer to the [Error Handling](https://docs.dify.ai/guides/workflow/error-handling). diff --git a/en/guides/workflow/nodes/loop.mdx b/en/guides/workflow/nodes/loop.mdx new file mode 100644 index 00000000..1b131773 --- /dev/null +++ b/en/guides/workflow/nodes/loop.mdx @@ -0,0 +1,85 @@ +--- +title: Loop +--- + +## What is Loop Node? + +A **Loop** node executes repetitive tasks that depend on previous iteration results until exit conditions are met or the maximum loop count is reached. + +## Loop vs. Iteration + + + + + + + + + + + + + + + + + + + + + +
+| Type | Dependencies | Use Cases |
+| --- | --- | --- |
+| Loop | Each iteration depends on previous results | Recursive operations, optimization problems |
+| Iteration | Iterations execute independently | Batch processing, parallel data handling |
+ +## Configuration +
+| Parameter | Description | Example |
+| --- | --- | --- |
+| Loop Termination Condition | Expression that determines when to exit the loop | `x < 50`, `error_rate < 0.01` |
+| Maximum Loop Count | Upper limit on iterations to prevent infinite loops | 10, 100, 1000 |
+ +![Configuration](https://assets-docs.dify.ai/2025/03/13853bfaaa068cdbdeba1b1f75d482f2.png) + +## Usage Example + +**Goal: Generate random numbers (1-100) until a value below 50 appears.** + +**Steps**: + +1. Use `node` to generate a random number between 1-100. + +2. Use `if` to evaluate the number: + + - If < 50: Output `done` and terminate loop. + + - If ≥ 50: Continue loop and generate another random number. + +3. Set the exit criterion to random_number < 50. + +4. Loop ends when a number below 50 appears. + +![Steps](https://assets-docs.dify.ai/2025/03/b1c277001fc3cb1fbb85fe7c22a6d0fc.png) + +## Planned Enhancements + +**Future releases will include:** + + - Loop variables: Store and reference values across iterations for improved state management and conditional logic. + + - `break` node: Terminate loops from within the execution path, enabling more sophisticated control flow patterns. diff --git a/en/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx b/en/guides/workflow/nodes/parameter-extractor.mdx similarity index 64% rename from en/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx rename to en/guides/workflow/nodes/parameter-extractor.mdx index 786b95f7..f9785895 100644 --- a/en/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx +++ b/en/guides/workflow/nodes/parameter-extractor.mdx @@ -1,15 +1,15 @@ --- title: Parameter Extraction -version: 'English' --- + ### 1 Definition Utilize LLM to infer and extract structured parameters from natural language for subsequent tool invocation or HTTP requests. -Dify workflows provide a rich selection of [tools](/en-us/user-guide/build-app/flow-app/nodes/tools), most of which require structured parameters as input. The parameter extractor can convert user natural language into parameters recognizable by these tools, facilitating tool invocation. +Dify workflows provide a rich selection of [tools](../../tools.md), most of which require structured parameters as input. 
The parameter extractor can convert user natural language into parameters recognizable by these tools, facilitating tool invocation. -Some nodes within the workflow require specific data formats as inputs, such as the [iteration](/en-us/user-guide/build-app/flow-app/nodes/iteration) node, which requires an array format. The parameter extractor can conveniently achieve structured parameter conversion. +Some nodes within the workflow require specific data formats as inputs, such as the [iteration](iteration.md#definition) node, which requires an array format. The parameter extractor can conveniently achieve [structured parameter conversion](iteration.md#example-1-long-article-iteration-generator). *** @@ -19,13 +19,13 @@ Some nodes within the workflow require specific data formats as inputs, such as In this example: The Arxiv paper retrieval tool requires **paper author** or **paper ID** as input parameters. The parameter extractor extracts the paper ID **2405.10739** from the query "What is the content of this paper: 2405.10739" and uses it as the tool parameter for precise querying. -![Arxiv Paper Retrieval Tool](/images/assets/precise-query.png) +![Arxiv Paper Retrieval Tool](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/a8bae4106a015c76ebb0a165f2409458.png) -2. **Converting text to structured data**, such as in the long story iteration generation application, where it serves as a pre-step for the [iteration node](/en-us/user-guide/build-app/flow-app/nodes/iteration), converting chapter content in text format to an array format, facilitating multi-round generation processing by the iteration node. +2. **Converting text to structured data**, such as in the long story iteration generation application, where it serves as a pre-step for the [iteration node](iteration.md), converting chapter content in text format to an array format, facilitating multi-round generation processing by the iteration node. 
-![](/images/assets/convert-chapter-content.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/71d8e48d842342668f92e6dd84fc03c1.png) -3. **Extracting structured data and using the** [**HTTP Request**](/en-us/user-guide/build-app/flow-app/nodes/http-request), which can request any accessible URL, suitable for obtaining external retrieval results, webhooks, generating images, and other scenarios. +3. **Extracting structured data and using the** [**HTTP Request**](https://docs.dify.ai/guides/workflow/node/http-request), which can request any accessible URL, suitable for obtaining external retrieval results, webhooks, generating images, and other scenarios. *** diff --git a/en/user-guide/build-app/flow-app/nodes/question-classifier.mdx b/en/guides/workflow/nodes/question-classifier.mdx similarity index 88% rename from en/user-guide/build-app/flow-app/nodes/question-classifier.mdx rename to en/guides/workflow/nodes/question-classifier.mdx index 4c4005da..ab8378f9 100644 --- a/en/user-guide/build-app/flow-app/nodes/question-classifier.mdx +++ b/en/guides/workflow/nodes/question-classifier.mdx @@ -1,8 +1,8 @@ --- title: Question Classifier -version: 'English' --- + ### 1. Definition By defining classification descriptions, the issue classifier can infer and match user inputs to the corresponding categories and output the classification results. @@ -17,7 +17,7 @@ In a typical product customer service Q\&A scenario, the issue classifier can se The following diagram is an example workflow template for a product customer service scenario: -![](/images/assets/question-classifier-scenarios.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/2f06ecce149c844c23be70a8fcff09bc.png) In this scenario, we set up three classification labels/descriptions: @@ -35,7 +35,7 @@ When users input different questions, the issue classifier will automatically cl ### 3.
How to Configure -![](/images/assets/question-classifier-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/2f039c5ff3f095b0eed291101d9bff15.png) **Configuration Steps:** @@ -54,6 +54,4 @@ When users input different questions, the issue classifier will automatically cl **Output Variable**: -`class_name` - -This is the classification name output after classification. You can use the classification result variable in downstream nodes as needed. +`class_name` stores the classification output label. You can reference this classification result in downstream nodes when needed. diff --git a/en/guides/workflow/nodes/start.mdx b/en/guides/workflow/nodes/start.mdx new file mode 100644 index 00000000..9a05f46c --- /dev/null +++ b/en/guides/workflow/nodes/start.mdx @@ -0,0 +1,150 @@ +--- +title: Start +--- + +### Definition + +The **“Start”** node is a critical preset node in the Chatflow / Workflow application. It provides essential initial information, such as user input and [uploaded files](../file-upload.md), to support the normal flow of the application and subsequent workflow nodes. + +### Configuring the Node + +On the Start node's settings page, you'll find two sections: **"Input Fields"** and preset **System Variables**. + +![Chatflow and Workflow](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/c4a9bb46f636807f0b59710724fddc40.png) + +### Input Field + +Input field is configured by application developers to prompt users for additional information. + +For example, in a weekly report application, users might be required to provide background information such as name, work date range, and work details in a specific format. This preliminary information helps the LLM generate higher quality responses. + +Six types of input variables are supported, all of which can be set as required: + +* **Text:** Short text, filled in by the user, with a maximum length of 256 characters. 
+* **Paragraph:** Long text, allowing users to input longer content.
+* **Select:** Fixed options set by the developer; users can only select from the preset options and cannot input custom content.
+* **Number:** Only allows numerical input.
+* **Single File:** Allows users to upload a single file. Supports document, image, audio, video, and other file types. Users can upload a local file or paste a file URL. For detailed usage, refer to File Upload.
+* **File List:** Allows users to batch upload files. Supports document, image, audio, video, and other file types. Users can upload local files or paste file URLs. For detailed usage, refer to File Upload.
+
+Dify's built-in document extractor node can only process certain document formats. To process image, audio, or video files, refer to External Data Tools to set up the corresponding file processing nodes.
+
+Once configured, users will be guided to provide the necessary information before using the application; the more complete this information, the higher the quality of the LLM's responses.
+
+### System Variables
+
+System variables are preset system-level parameters in Chatflow / Workflow applications that can be globally accessed by other nodes in the application. They are typically used in advanced development scenarios, such as building multi-turn dialogue applications, collecting application logs and monitoring data, or recording usage behavior across different applications and users.
+
+**Workflow**
+
+Workflow applications provide the following system variables:
+
+| Variable Name | Data Type | Description | Notes |
+| --- | --- | --- | --- |
+| `sys.files` [LEGACY] | Array[File] | File parameter; stores images uploaded by users when first using the application | The image upload feature needs to be enabled in the "Features" section at the top right of the application orchestration page |
+| `sys.user_id` | String | User ID; a unique identifier automatically assigned to each user of the workflow application, used to distinguish different users | |
+| `sys.app_id` | String | Application ID; a unique identifier assigned by the system to each Workflow application, used to distinguish different applications and record basic information about the current application | Users with development capabilities can use this parameter to differentiate and locate different Workflow applications |
+| `sys.workflow_id` | String | Workflow ID; records all node information contained in the current Workflow application | Users with development capabilities can use this parameter to track and record node information within the Workflow |
+| `sys.workflow_run_id` | String | Workflow run ID; records the running status of the Workflow application | Users with development capabilities can use this parameter to track the application's run history |
+**Chatflow**
+
+Chatflow applications provide the following system variables:
+
+| Variable Name | Data Type | Description | Notes |
+| --- | --- | --- | --- |
+| `sys.query` | String | The initial content entered by the user in the dialogue box | |
+| `sys.files` | Array[File] | Images uploaded by the user in the dialogue box | The image upload feature needs to be enabled in the "Features" section at the top right of the application orchestration page |
+| `sys.dialogue_count` | Number | The number of dialogue turns in the user's interaction with the Chatflow application; automatically increments by 1 after each turn. Can be combined with if-else nodes to create rich branching logic, e.g. reviewing the conversation history and providing an analysis at the Xth turn | |
+| `sys.conversation_id` | String | Unique identifier for the dialogue session; groups all related messages into the same conversation, ensuring the LLM continues the dialogue on the same topic and context | |
+| `sys.user_id` | String | Unique identifier assigned to each application user, used to distinguish different conversation users | |
+| `sys.app_id` | String | Application ID; a unique identifier assigned by the system to each application, used to distinguish different applications and record basic information about the current application | Users with development capabilities can use this parameter to differentiate and locate different applications |
+| `sys.workflow_id` | String | Workflow ID; records all node information contained in the current application | Users with development capabilities can use this parameter to track and record node information within the Workflow |
+| `sys.workflow_run_id` | String | Workflow run ID; records the running status of the application | Users with development capabilities can use this parameter to track the application's run history |
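+The `sys.dialogue_count` branching idea — diverting every Nth turn to a review branch — can be sketched in plain Python. This is purely illustrative; the function name, branch labels, and threshold are hypothetical, not part of Dify's API:

```python
def route_turn(dialogue_count: int, review_every: int = 5) -> str:
    """Hypothetical sketch: emulate an if-else branch on sys.dialogue_count,
    sending every Nth turn to a conversation-review branch."""
    if dialogue_count % review_every == 0:
        return "review_branch"   # e.g. summarize the conversation so far
    return "default_branch"      # normal LLM response

print(route_turn(4))   # → default_branch
print(route_turn(10))  # → review_branch
```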
diff --git a/en/user-guide/build-app/flow-app/nodes/template.mdx b/en/guides/workflow/nodes/template.mdx similarity index 83% rename from en/user-guide/build-app/flow-app/nodes/template.mdx rename to en/guides/workflow/nodes/template.mdx index 9d4f9410..816b1d14 100644 --- a/en/user-guide/build-app/flow-app/nodes/template.mdx +++ b/en/guides/workflow/nodes/template.mdx @@ -2,9 +2,10 @@ title: Template --- + Template lets you dynamically format and combine variables from previous nodes into a single text-based output using Jinja2, a powerful templating syntax for Python. It's useful for combining data from multiple sources into a specific structure required by subsequent nodes. The simple example below shows how to assemble an article by piecing together various previous outputs: -![](/en-us/images/assets/image-(158).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/549bd41e839ba2689d7ff286f77f7489.png) Beyond naive use cases, you can create more complex templates as per Jinja's [documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/) for a variety of tasks. Here's one template that structures retrieved chunks and their relevant metadata from a knowledge retrieval node into a formatted markdown: @@ -24,7 +25,7 @@ Beyond naive use cases, you can create more complex templates as per Jinja's [do {% endraw %} ``` -![](/en-us/images/assets/image-(159).png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/51f80c553d979e947a9749a9f820b6ab.png) This template node can then be used within a Chatflow to return intermediate outputs to the end user, before a LLM response is initiated. 
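+As a minimal illustration of how such a Jinja2 template behaves outside Dify (the variable and field names below are made up for the example, not the node's actual inputs):

```python
from jinja2 import Template  # Jinja2 is the templating engine the Template node uses

# Hypothetical template: render retrieved chunks into markdown sections,
# similar in spirit to the knowledge-retrieval example above.
template = Template(
    "{% for chunk in chunks %}"
    "## Chunk {{ loop.index }}\n"
    "{{ chunk['content'] }}\n\n"
    "{% endfor %}"
)

chunks = [
    {"content": "First retrieved passage."},
    {"content": "Second retrieved passage."},
]
markdown = template.render(chunks=chunks)
print(markdown)
```

The same `{% for %}` / `{{ ... }}` syntax works inside the Template node; only the variable bindings come from upstream nodes instead of Python.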
diff --git a/en/guides/workflow/nodes/tools.mdx b/en/guides/workflow/nodes/tools.mdx new file mode 100644 index 00000000..ba880ffa --- /dev/null +++ b/en/guides/workflow/nodes/tools.mdx @@ -0,0 +1,64 @@ +--- +title: Tools +--- + + +The workflow provides a rich selection of tools, categorized into three types: + +* **Built-in Tools**: Tools provided by Dify. +* **Custom Tools**: Tools imported or configured via the OpenAPI/Swagger standard format. +* **Workflows**: Workflows that have been published as tools. + +## Add and Use the Tool Node + +Before using built-in tools, you may need to **authorize** the tools. + +If built-in tools do not meet your needs, you can create custom tools in the **Dify menu navigation -- Tools** section. + +You can also orchestrate a more complex workflow and publish it as a tool. + +
+ +Configuring a tool node generally involves two steps: + +1. Authorizing the tool/creating a custom tool/publishing a workflow as a tool. +2. Configuring the tool's input and parameters. + +For more information on how to create custom tools and configure them, please refer to the [Tool Configuration Guide](https://docs.dify.ai/guides/tools). + +### Advanced Features + +**Retry on Failure** + +For some exceptions that occur in the node, it is usually sufficient to retry the node again. When the error retry function is enabled, the node will automatically retry according to the preset strategy when an error occurs. You can adjust the maximum number of retries and the interval between each retry to set the retry strategy. + +- The maximum number of retries is 10 +- The maximum retry interval is 5000 ms + +![](https://assets-docs.dify.ai/2024/12/34867b2d910d74d2671cd40287200480.png) + +**Error Handling** + +Tool nodes may encounter errors during information processing that could interrupt the workflow. Developers can follow these steps to configure fail branches, enabling contingency plans when nodes encounter exceptions, avoiding workflow interruptions. + +1. Enable "Error Handling" in the tool node +2. Select and configure an error-handling strategy + +![](https://assets-docs.dify.ai/2024/12/39dc3b5881d9a5fe35b877971f70d3a6.png) + +For more information about exception handling approaches, please refer to [Error Handling](https://docs.dify.ai/guides/workflow/error-handling). + +## Publishing Workflow Applications as Tools + +Workflow applications can be published as tools and used by nodes in other workflows. For information about creating custom tools and tool configuration, please refer to the [Tool Configuration Guide](https://docs.dify.ai/guides/tools). 
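+The retry-on-failure behavior described under Advanced Features can be approximated in a few lines. This is a sketch of the semantics, not Dify's implementation; the 10-retry and 5000 ms caps mirror the limits stated above:

```python
import time

def run_with_retry(node, max_retries=3, interval_ms=1000):
    """Illustrative sketch: re-run a failing node up to max_retries times
    (capped at 10), waiting interval_ms (capped at 5000 ms) between attempts."""
    max_retries = min(max_retries, 10)
    interval_ms = min(interval_ms, 5000)
    last_error = None
    for attempt in range(max_retries + 1):  # first run + retries
        try:
            return node()
        except Exception as err:
            last_error = err
            if attempt < max_retries:
                time.sleep(interval_ms / 1000)
    raise last_error  # retries exhausted: hand off to the error-handling branch
```

If every attempt fails, the final exception propagates — which is exactly where the fail-branch configuration described above takes over.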
diff --git a/en/user-guide/build-app/flow-app/nodes/variable-aggregator.mdx b/en/guides/workflow/nodes/variable-aggregator.mdx similarity index 78% rename from en/user-guide/build-app/flow-app/nodes/variable-aggregator.mdx rename to en/guides/workflow/nodes/variable-aggregator.mdx index 1356001c..2d02fff3 100644 --- a/en/user-guide/build-app/flow-app/nodes/variable-aggregator.mdx +++ b/en/guides/workflow/nodes/variable-aggregator.mdx @@ -2,6 +2,7 @@ title: Variable Aggregator --- + ### 1 Definition Aggregate variables from multiple branches into a single variable to achieve unified configuration for downstream nodes. @@ -18,15 +19,15 @@ Through variable aggregation, you can aggregate multiple outputs, such as from i Without variable aggregation, the branches of Classification 1 and Classification 2, after different knowledge base retrievals, would require repeated definitions for downstream LLM and direct response nodes. -![Issue Classification (without Variable Aggregation)](/en-us/images/assets/image-(227).png) +![Issue Classification (without Variable Aggregation)](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/7a7c91663c3799ce9d056b013d5df29c.png) By adding variable aggregation, the outputs of the two knowledge retrieval nodes can be aggregated into a single variable. 
-![Multi-Branch Aggregation after Issue Classification](/images/assets/variable-aggregation.png) +![Multi-Branch Aggregation after Issue Classification](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/2b1694936fdab4843f5edc3f2fd1e79a.png) **Multi-Branch Aggregation after IF/ELSE Conditional Branching** -![Multi-Branch Aggregation after Conditional Branching](/images/assets/if-else-conditional.png) +![Multi-Branch Aggregation after Conditional Branching](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/ff0e5774a3eccc8a04c310ab9bae25e7.png) ### 3 Format Requirements @@ -36,4 +37,6 @@ The variable aggregator supports aggregating various data types, including strin **Aggregation Grouping** +Starting from version v0.6.10, aggregation grouping is supported. + When aggregation grouping is enabled, the variable aggregator can aggregate multiple groups of variables, with each group requiring the same data type for aggregation. diff --git a/en/user-guide/build-app/flow-app/nodes/variable-assigner.mdx b/en/guides/workflow/nodes/variable-assigner.mdx similarity index 68% rename from en/user-guide/build-app/flow-app/nodes/variable-assigner.mdx rename to en/guides/workflow/nodes/variable-assigner.mdx index 05a35e12..1004c680 100644 --- a/en/user-guide/build-app/flow-app/nodes/variable-assigner.mdx +++ b/en/guides/workflow/nodes/variable-assigner.mdx @@ -2,25 +2,22 @@ title: Variable Assigner --- + ### Definition The variable assigner node is used to assign values to writable variables. Currently supported writable variables include: -* [Conversation variables](/en-us/user-guide/build-app/flow-app/concepts#variables). +* [conversation variables](https://docs.dify.ai/guides/workflow/key-concepts#conversation-variables). Usage: Through the variable assigner node, you can assign workflow variables to conversation variables for temporary storage, which can be continuously referenced in subsequent conversations. 
-![](/images/assets/variable-assigner.png) +![](https://assets-docs.dify.ai/2024/11/83d0b9ef4c1fad947b124398d472d656.png) *** ### Usage Scenario Examples -Using the variable assigner node, you can write context from the conversation process, files uploaded to the dialog box (coming soon), and user preference information into conversation variables. These stored variables can then be referenced in subsequent conversations to direct different processing flows or formulate responses. - -**Scenario 1** - -You can write the **context during the conversation, the file uploaded to the chatting box (coming soon), the preference information entered by the user,etc.** into the conversation variable using **Variable Assigner** node. These stored information can be referenced in subsequent chats to guide different processing flows or provide responses. +Using the variable assigner node, you can write context from the conversation process, files uploaded to the dialog box, and user preference information into conversation variables. These stored variables can then be referenced in subsequent conversations to direct different processing flows or formulate responses. **Scenario 1** @@ -28,7 +25,7 @@ You can write the **context during the conversation, the file uploaded to the ch Example: After the conversation starts, LLM will automatically determine whether the user's input contains facts, preferences, or chat history that need to be remembered. If it has, LLM will first extract and store those information, then use it as context to respond. If there is no new information to remember, LLM will directly use the previously relevant memories to answer questions. 
-![](/images/assets/conversation-variables-scenario-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/8d0492814b1515f50e87b2900ff400db.png) **Configuration process:** @@ -114,7 +111,7 @@ def main(arg1: list) -> str: Example: Before the chatting, the user specifies "English" in the `language` input box. This language will be written to the conversation variable, and the LLM will reference this information when responding, continuing to use "English" in subsequent conversations. -![](/images/assets/conversation-var-scenario-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/1867d608a7d009431b73377ed65b427b.png) **Configuration Guide:** @@ -130,7 +127,7 @@ Example: Before the chatting, the user specifies "English" in the `language` inp Example: After starting the conversation, the LLM will ask the user to input items related to the Checklist in the chatting box. Once the user mentions content from the Checklist, it will be updated and stored in the Conversation Variable. The LLM will remind the user to continue supplementing missing items after each round of dialogue. -![](/images/assets/conversation-var-scenario-2-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/c4362b01298b12e7d6fcd9e798f3165a.png) **Configuration Process:** @@ -142,20 +139,45 @@ Example: After starting the conversation, the LLM will ask the user to input ite ### Using the Variable Assigner Node -Click the + sign on the right side of the node, select the "variable assigner" node, and fill in "Assigned Variable" and "Set Variable". +Click the `+` icon on the right side of the node and select the **“Variable Assignment”** node. Configure the target variables and their corresponding source variables. This node allows you to assign values to multiple variables simultaneously. 
-![](/images/assets/language-variable-assigner.png) +![](https://assets-docs.dify.ai/2024/11/ee15dee864107ba5a93b459ebdfc32cf.png) **Setting Variables:** -Assigned Variable: Select the variable to be assigned, i.e., specify the target conversation variable that needs to be assigned. +Variable: Select the variable to be assigned, i.e., specify the target conversation variable that needs to be assigned. Set Variable: Select the variable to assign, i.e., specify the source variable that needs to be converted. -Taking the assignment logic in the above figure as an example: Assign the text output item `Language Recognition/text` from the previous node to the conversation variable `language`. +The variable assignment logic illustrated in the image above assigns the user’s language preference, specified on the initial page `Start/language`, to the system-level conversation variable `language`. -**Write Mode:** +### **Operation Modes for Specifying Variables** + +The data type of the target variable determines its operation method. Below are the operation modes for different variable types: + +1. Target variable data type: `String` + + • **Overwrite**: Directly overwrite the target variable with the source variable. + • **Clear**: Clear the contents of the selected target variable. + • **Set**: Manually assign a value without requiring a source variable. + +2. Target variable data type: `Number` + + • **Overwrite**: Directly overwrite the target variable with the source variable. + • **Clear**: Clear the contents of the selected target variable. + • **Set**: Manually assign a value without requiring a source variable. + • **Arithmetic**: Perform addition, subtraction, multiplication, or division on the target variable. + +3. Target variable data type: `Object` + + • **Overwrite**: Directly overwrite the target variable with the source variable. + • **Clear**: Clear the contents of the selected target variable. 
+ • **Set**: Manually assign a value without requiring a source variable. + +4. Target variable data type: `Array` + + • **Overwrite**: Directly overwrite the target variable with the source variable. + • **Clear**: Clear the contents of the selected target variable. + • **Append**: Add a new element to the array in the target variable. + • **Extend**: Add a new array to the target variable, effectively adding multiple elements at once. -* Overwrite: Overwrite the content of the source variable to the target conversation variable -* Append: When the specified variable is of Array type -* Clear: Clear the content in the target conversation variable diff --git a/en/user-guide/build-app/flow-app/orchestrate-node.mdx b/en/guides/workflow/orchestrate-node.mdx similarity index 76% rename from en/user-guide/build-app/flow-app/orchestrate-node.mdx rename to en/guides/workflow/orchestrate-node.mdx index 20674067..4a611f8b 100644 --- a/en/user-guide/build-app/flow-app/orchestrate-node.mdx +++ b/en/guides/workflow/orchestrate-node.mdx @@ -1,11 +1,11 @@ --- title: Orchestrate Node -version: 'English' --- + Both Chatflow and Workflow applications support node orchestration through visual drag-and-drop, with two orchestration design patterns: serial and parallel. -![](/images/assets/orchestrate-node.jpeg) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/3984e13db72e2bd19870f5764ec000cf.jpeg) ## Serial Node Design Pattern @@ -19,13 +19,13 @@ Consider a "Novel Generation" Workflow App implementing serial pattern: after th 2. Sequentially link the nodes. 3. Converge all paths to the "End" node to finalize the workflow. -![](/images/assets/orchestrate-node-serial-design.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/e8e884e146994b5f95cb16ec31cdd81b.png) ### Viewing Serial Structure Application Logs In a serial structure application, logs display node operations sequentially. 
Click "View Logs - Tracing" in the upper right corner of the dialog box to see the complete workflow process, including input/output, token consumption, and runtime for each node. -![](/images/assets/viewing-serial-structure-app-logs.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/1707ee3651f154fcb90c882a2aeab6e9.png) ## Designing Parallel Structure @@ -39,19 +39,19 @@ The following four methods demonstrate how to create a parallel structure throug **Method 1** Hover over a node to reveal the `+` button. Click it to add multiple nodes, automatically forming a parallel structure. -![](/images/assets/orchestrate-node-parallel-design-method-1.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/b93ff4b81f2a5526a8787aa1e9fb314d.png) **Method 2** Extend a connection from a node by dragging its `+` button, creating a parallel structure. -![](/images/assets/orchestrate-node-parallel-design-method-2.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/8deebdb38e3848966ed667e6ed97bdce.png) **Method 3** With multiple nodes on the canvas, visually drag and link them to form a parallel structure. -![](/images/assets/orchestrate-node-parallel-design-method-3.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/3997bca0f5efa1a3c4a214dbe3ed1f0c.png) **Method 4** In addition to canvas-based methods, you can generate parallel structures by adding nodes through the "Next Step" section in a node's right-side panel. This approach automatically creates the parallel configuration. -![](/en-us/img/orchestrate-node-parallel-design-method-4.png) +![](../../../img/orchestrate-node-parallel-design-method-4.jpeg) **Notes:** @@ -60,7 +60,7 @@ The following four methods demonstrate how to create a parallel structure throug * Chatflow applications support multiple "answer" nodes. 
Each parallel structure in these applications must terminate with an "answer" node to ensure proper output of content; * All parallel structures will run simultaneously; nodes within the parallel structure output results after completing their tasks, with no order relationship in output. The simpler the parallel structure, the faster the output of results. -![](/images/assets/orchestrate-node-chatflow-multi-answer.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/1d0884e4c9bfa548d84849871719d646.png) ### Designing Parallel Structure Patterns @@ -72,7 +72,7 @@ Normal parallel refers to the `Start | Parallel Nodes | End three-layer` relatio The upper limit for parallel branches is 10. -![](/images/assets/orchestrate-node-simple-parallel.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/5ba85864454880561ec95a37db382f20.png) #### 2. Nested Parallel @@ -80,22 +80,22 @@ Nested parallel refers to the Start | Multiple Parallel Structures | End multi-l A workflow supports up to 3 layers of nesting relationships. -![](/images/assets/orchestrate-node-nested-parallel.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/036f9fcfb1d0f8dedbd34e90ebb64c29.png) #### 3. Conditional Branch + Parallel Parallel structures can also be used in conjunction with conditional branches. -![](/images/assets/orchestrate-node-conditional-branch-parallel.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/d28637a39327032fa333fd49b9dd2e73.png) #### 4. Iteration Branch + Parallel This pattern integrates parallel structures within iteration branches, optimizing the execution efficiency of repetitive tasks. 
-![](/images/assets/orchestrate-node-iteration-parallel.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/bc06917031cf52c2e3d8cf9fe8a8dc8b.png) ### Viewing Parallel Structure Application Logs Applications with parallel structures generate logs in a tree-like format. Collapsible parallel node groups facilitate easier viewing of individual node logs. -![](/images/assets/orchestrate-node-parallel-logs.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/ad6acb838b58d2e0c8f99669b24aa20d.png) diff --git a/en/user-guide/build-app/flow-app/application-publishing.mdx b/en/guides/workflow/publish.mdx similarity index 62% rename from en/user-guide/build-app/flow-app/application-publishing.mdx rename to en/guides/workflow/publish.mdx index ae1142fb..b6fcb63a 100644 --- a/en/user-guide/build-app/flow-app/application-publishing.mdx +++ b/en/guides/workflow/publish.mdx @@ -2,9 +2,10 @@ title: Application Publishing --- + After completing debugging, clicking "Publish" in the upper right corner allows you to save and quickly release the workflow as different types of applications. -![](https://r2.xmsex.net/2025/03/6cd7d2105cb5a9e4f25601efbda4ffb0.png) +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/ea40850e9b8cc216b540362a7425ac5c.png) Conversational applications can be published as: @@ -19,5 +20,5 @@ Workflow applications can be published as: * Access API Reference -To manage multiple versions of Chatflow/Workflow, see [Version Control](../../../management/version-control). +To manage multiple versions of chatflow/workflow, see [Version Control](https://docs.dify.ai/guides/management/version-control). 
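+The parallel-execution semantics above — all branches run simultaneously, outputs arrive in completion order, and a parallel structure supports at most 10 branches — can be sketched with a thread pool. Names here are illustrative, not Dify internals:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_PARALLEL_BRANCHES = 10  # documented upper limit for parallel branches

def run_parallel(branches):
    """Illustrative sketch: run branch callables concurrently; results are
    collected as each branch finishes, so completion order is not guaranteed."""
    if not branches:
        return {}
    if len(branches) > MAX_PARALLEL_BRANCHES:
        raise ValueError("a parallel structure supports at most 10 branches")
    results = {}
    with ThreadPoolExecutor(max_workers=len(branches)) as pool:
        futures = {pool.submit(fn): name for name, fn in branches.items()}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

print(run_parallel({"branch_a": lambda: 1, "branch_b": lambda: 2}))
```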
\ No newline at end of file diff --git a/en/guides/workflow/shortcut-key.mdx b/en/guides/workflow/shortcut-key.mdx new file mode 100644 index 00000000..27552d06 --- /dev/null +++ b/en/guides/workflow/shortcut-key.mdx @@ -0,0 +1,98 @@
+---
+title: Shortcut Key
+---
+
+The Chatflow / Workflow application orchestration page supports the following shortcut keys to help you orchestrate nodes more efficiently:
+| Windows | macOS | Explanation |
+| --- | --- | --- |
+| Ctrl + C | Command + C | Copy nodes |
+| Ctrl + V | Command + V | Paste nodes |
+| Ctrl + D | Command + D | Duplicate nodes |
+| Ctrl + O | Command + O | Organize nodes |
+| Ctrl + Z | Command + Z | Undo |
+| Ctrl + Y | Command + Y | Redo |
+| Ctrl + Shift + Z | Command + Shift + Z | Redo |
+| Ctrl + 1 | Command + 1 | Fit canvas to view |
+| Ctrl + (-) | Command + (-) | Zoom out the canvas |
+| Ctrl + (=) | Command + (=) | Zoom in the canvas |
+| Shift + 1 | Shift + 1 | Reset canvas view to 100% |
+| Shift + 5 | Shift + 5 | Scale canvas to 50% |
+| H | H | Switch canvas to Hand mode |
+| V | V | Switch canvas to Pointer mode |
+| Delete/Backspace | Delete/Backspace | Delete selected nodes |
+| Alt + R | Option + R | Run the workflow |
diff --git a/en/guides/workflow/variables.mdx b/en/guides/workflow/variables.mdx new file mode 100644 index 00000000..5003c0dd --- /dev/null +++ b/en/guides/workflow/variables.mdx @@ -0,0 +1,137 @@
+---
+title: Variables
+description: Last edited by Allen, Dify Technical Writer
+---
+
+**Workflow** and **Chatflow** applications are composed of independent nodes. Most nodes have input and output items, but the input and output information of each node differs and changes dynamically.
+
+**How can a fixed symbol refer to dynamically changing content?** Variables, as dynamic data containers, can store and pass along content that is not fixed; they can be referenced across different nodes, enabling flexible information flow between nodes.
+
+### System Variables
+
+System variables are preset system-level parameters within a Chatflow / Workflow app that can be globally read by other nodes. All system-level variables begin with `sys.`.
+
+#### Workflow
+
+Workflow applications provide the following system variables:
+| Variable Name | Data Type | Description | Remark |
+| --- | --- | --- | --- |
+| `sys.files` [LEGACY] | Array[File] | File parameter: stores images uploaded by users | The image upload feature needs to be enabled in the "Features" section in the upper right corner of the application orchestration page |
+| `sys.user_id` | String | User ID: a unique identifier automatically assigned by the system to each user of a workflow application, used to distinguish different users | |
+| `sys.app_id` | String | App ID: a unique identifier automatically assigned by the system to each app, recording basic information about the current application | Users with development capabilities can use this parameter to differentiate and locate distinct Workflow applications |
+| `sys.workflow_id` | String | Workflow ID: records information about all nodes in the current Workflow application | Users with development capabilities can use this parameter to track and record information about the nodes contained within a Workflow |
+| `sys.workflow_run_id` | String | Workflow Run ID: records the runtime status and execution logs of a Workflow application | Users with development capabilities can use this parameter to track the application's historical execution records |
+![Workflow App System Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/09d9f977965d41cf056f37a4f8f952db.png)
+
+#### Chatflow
+
+Chatflow applications provide the following system variables:
+| Variable Name | Data Type | Description | Remark |
+| --- | --- | --- | --- |
+| `sys.query` | String | Content entered by the user in the chat box | |
+| `sys.files` | Array[File] | File parameter: stores images uploaded by users | The image upload feature needs to be enabled in the "Features" section in the upper right corner of the application orchestration page |
+| `sys.dialogue_count` | Number | The number of conversation turns in the user's interaction with a Chatflow application. The count automatically increases by one after each chat round and can be combined with if-else nodes to create rich branching logic, e.g. having the LLM review the conversation history and provide an analysis at the Xth turn | |
+| `sys.conversation_id` | String | A unique ID for the chat session; groups all related messages into the same conversation, ensuring the LLM continues the chat on the same topic and context | |
+| `sys.user_id` | String | A unique ID assigned to each application user, used to distinguish different conversation users | |
+| `sys.workflow_id` | String | Workflow ID: records information about all nodes in the current Workflow application | Users with development capabilities can use this parameter to track and record information about the nodes contained within a Workflow |
+| `sys.workflow_run_id` | String | Workflow Run ID: records the runtime status and execution logs of a Workflow application | Users with development capabilities can use this parameter to track the application's historical execution records |
![Chatflow App System Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/79340d58bf2c202dd6bd4e93fa03272d.png)

### Environment Variables

**Environment variables are used to protect sensitive information involved in workflows**, such as API keys and database passwords used when running workflows. They are stored in the workflow rather than in the code, allowing them to be shared across different environments.

![Environment Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/475c2b9a04b8c9ab16b7d3dba21755f3.png)

Environment variables support the following three data types:

* String
* Number
* Secret

Environment variables have the following characteristics:

* Environment variables can be referenced globally within most nodes;
* Environment variable names cannot be duplicated;
* Output variables of nodes are generally read-only and cannot be written to.

***

### Conversation Variables

> Conversation variables apply only to [Chatflow](variables.md#chatflow-and-workflow) apps.

**Conversation variables allow application developers to specify information that needs to be temporarily stored within the same Chatflow session, ensuring it can be referenced across multiple rounds of chat within the current Chatflow.** This can include context, files uploaded to the chat box (coming soon), user preferences entered during the conversation, and so on. They act as a "memo" the LLM can check at any time, avoiding information drift caused by LLM memory errors.

For example, you can store the language preference the user enters in the first round of chat in a conversation variable. The LLM will consult the conversation variable when answering and reply in the specified language in subsequent chats.
![Conversation Variable](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/c04285fec92f13a20ccbd3e21361a30d.png)

**Conversation variables** support the following six data types:

* String
* Number
* Object
* Array\[string]
* Array\[number]
* Array\[object]

**Conversation variables** have the following features:

* Conversation variables can be referenced globally within most nodes in the same Chatflow app;
* Writing to conversation variables requires the [Variable Assigner](https://docs.dify.ai/guides/workflow/node/variable-assignment) node;
* Conversation variables are read-write variables.

For details on using conversation variables with the Variable Assigner node, see [Variable Assigner](node/variable-assignment.md).

To track changes in conversation variable values while debugging the application, click the conversation variable icon at the top of the Chatflow application preview page.

![](https://assets-docs.dify.ai/2024/11/cc8067fa4c96436f037f8210ebe3f65c.png)

### Notice

* To avoid variable name collisions, node names must not be duplicated.
* The output variables of nodes are generally fixed and cannot be edited.
diff --git a/en/guides/workspace/README.mdx b/en/guides/workspace/README.mdx
new file mode 100644
index 00000000..61f6262d
--- /dev/null
+++ b/en/guides/workspace/README.mdx
@@ -0,0 +1,18 @@
---
title: Collaboration
---

Dify is a multi-user platform where workspaces are the basic units of team collaboration. Members of a workspace can create and edit applications and knowledge bases, and can also directly use public applications created by other team members in the [Discover](app.md) area.

### Login Methods

Note that the login methods supported by Dify's cloud service and community edition differ, as shown in the table below.
| Login Method | Community Edition & Dify Premium | Dify Cloud | Enterprise Edition |
| --- | --- | --- | --- |
| Email Login | Supported | Not Supported | Supported |
| GitHub Login | Not Supported | Supported | - |
| Google Login | Not Supported | Supported | - |
| SSO Login | Not Supported | Not Supported | Supported |
+ +### Creating an Account + +If you are using the cloud service, a workspace will be automatically created for you upon your first login, and you will become the administrator. + +In the community version, you will be prompted to set an administrator email and password during installation. The community edition does not support the creation of multiple workspaces. diff --git a/en/guides/workspace/app.mdx b/en/guides/workspace/app.mdx new file mode 100644 index 00000000..0b210e21 --- /dev/null +++ b/en/guides/workspace/app.mdx @@ -0,0 +1,22 @@ +--- +title: Discover +--- + + +## Template Applications + +In the **Discover** section, several commonly used template applications are provided. These applications cover areas such as human resources, assistants, translation, programming, and writing. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workspace/e26314942a21cfafaaf576b6a0b723e2.png) + +To use a template application, click the "Add to Workspace" button on the template. You can then use the application in the workspace on the left side. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workspace/5ce9a17b6b811d010426c8fe3e0646ab.png) + +To modify a template and create a new application, click the "Customize" button on the template. + +## Workspace + +The workspace serves as the navigation for applications. Click on an application within the workspace to use it directly. + +Applications in the workspace include your own applications as well as those added to the workspace by other team members. diff --git a/en/guides/workspace/app/README.mdx b/en/guides/workspace/app/README.mdx new file mode 100644 index 00000000..25f2326b --- /dev/null +++ b/en/guides/workspace/app/README.mdx @@ -0,0 +1,22 @@ +--- +title: Discover +--- + + +## Template Applications + +In the **Discover** section, several commonly used template applications are provided. 
These applications cover areas such as human resources, assistants, translation, programming, and writing.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workspace/app/e26314942a21cfafaaf576b6a0b723e2.png)

To use a template application, click the "Add to Workspace" button on the template. You can then use the application in the workspace on the left side.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workspace/app/5ce9a17b6b811d010426c8fe3e0646ab.png)

To modify a template and create a new application, click the "Customize" button on the template.

## Workspace

The workspace serves as the navigation for applications. Click on an application within the workspace to use it directly.

Applications in the workspace include your own applications as well as those added to the workspace by other team members.
\ No newline at end of file
diff --git a/en/guides/workspace/billing.mdx b/en/guides/workspace/billing.mdx
new file mode 100644
index 00000000..1ffe835c
--- /dev/null
+++ b/en/guides/workspace/billing.mdx
@@ -0,0 +1,56 @@
---
title: Billing
description: Know more about Dify's billing plans to support expanding your usage.
---

## Workspace-based Billing

The Dify platform has "workspaces" and "apps". A workspace can contain multiple apps. Each app has capabilities such as prompt orchestration, LLM invocation, knowledge RAG, logging & annotation, and standard API delivery. **We recommend that one team or organization use one workspace, because the system bills on a per-workspace basis (calculated from total resource consumption within a workspace).** For example:

```Plaintext
Workspace 1
App 1 (Prompt, RAG, LLM, Knowledge base, Logging & Annotation, API)
App 2 (Prompt, RAG, LLM, Knowledge base, Logging & Annotation, API)
App 3 (Prompt, RAG, LLM, Knowledge base, Logging & Annotation, API)
...
Workspace 2
```

## Plan Quotas and Features

We offer a free plan for all users to test AI app ideas, including 200 OpenAI model message calls. After using up the free allowance, you need to obtain LLM API keys from different model providers and add them under **Settings --> Model Providers** to enable normal model capabilities. Upgrading your workspace to a paid plan unlocks paid resources for that workspace. For example, upgrading to Professional allows creating more than 10 apps (up to 50), with up to 200 MB of total vector storage shared across projects in that workspace. The quotas and features of each plan are as follows:
| Metric | Sandbox | Professional | Team |
| --- | --- | --- | --- |
| Pricing | Free | $59/month | $159/month |
| Model Providers | OpenAI, Anthropic, Azure OpenAI, Llama2, Hugging Face, Replicate | OpenAI, Anthropic, Azure OpenAI, Llama2, Hugging Face, Replicate | OpenAI, Anthropic, Azure OpenAI, Llama2, Hugging Face, Replicate |
| Team Members | 1 | 3 | Unlimited |
| Apps | 10 | 50 | Unlimited |
| Vector Storage | 5 MB | 200 MB | 1 GB |
| Document Processing Priority | Standard | Priority | Priority |
| Logo Change | / | / | |
| Message Requests | 500 per day | Unlimited | Unlimited |
| RAG API Requests Quota Limits | / | √ Coming soon | √ Coming soon |
| Annotation Quota Limits | 10 | 2000 | 5000 |
| Agent Model | / | √ Coming soon | √ Coming soon |
| Logs History | 30 days | Unlimited | Unlimited |
Check out the [pricing page](https://dify.ai/pricing) to learn more.

> **Vector storage:** Vector storage holds knowledge bases as vectors for LLMs to understand. Each 1 MB can store about 1.2 million characters of vectorized data (estimated using OpenAI Embeddings; this varies across models). How much the data shrinks depends on the complexity and repetition of the content.
>
> **Annotation Quota Limits:** Manual editing and annotation of responses provides customizable, high-quality question-answering abilities for apps.
>
> **Message Requests:** The number of times the Dify API is called daily during application sessions (rather than LLM API resource usage). It includes all messages generated from your applications via API calls or during WebApp sessions. **Note: daily quotas are refreshed at midnight Pacific Standard Time.**
>
> **RAG API Requests:** The number of API calls invoking only the knowledge base processing capabilities of Dify.

## Monitor Resource Usage

You can view capacity usage details on your workspace's Billing page.

![monitor resource usage](../.gitbook/assets/usage.png)

## FAQ

1. What happens if my resource usage exceeds the Free plan before I upgrade to a paid plan?

   > During Dify's Beta stage, excess quotas were provided for free to seed users' workspaces. After Dify's billing system goes live, your existing data will not be lost, but your workspace will no longer be able to process additional text vectorization storage. You will need to upgrade to a suitable plan to continue using Dify.
2. What if neither the Professional nor Team plans meet my usage needs?

   > If you are a large enterprise requiring more advanced plans, please email us at [business@dify.ai](mailto:business@dify.ai).
3. Under what circumstances do I need to pay when using the CE version?

   > When using the CE version, please follow our open source license terms.
If you need commercial use, such as removing Dify's logo, using multiple workspaces, or offering Dify in a SaaS model, you will need to contact us at [business@dify.ai](mailto:business@dify.ai) for commercial licensing.
diff --git a/en/guides/workspace/explore.mdx b/en/guides/workspace/explore.mdx
new file mode 100644
index 00000000..bae5e031
--- /dev/null
+++ b/en/guides/workspace/explore.mdx
@@ -0,0 +1,24 @@
---
title: Discovery
---

## Template Applications

In **Explore > Discovery**, some commonly used template applications are provided. These apps cover translation, writing, programming, and assistant use cases.

![](../explore/images/explore-app.jpg)

To use a template application, click the template's "Add to Workspace" button. The app then becomes available in the workspace on the left.

![](../explore/images/creat-customize-app.jpg)

To modify a template and create a new application, click the template's "Customize" button.

## Workspace

The workspace is the application's navigation. Click an application in the workspace to use it directly.

![](../explore/images/workspace.jpg)

Apps in the workspace include your own apps and apps added to the workspace by other team members.
diff --git a/en/guides/workspace/invite-and-manage-members.mdx b/en/guides/workspace/invite-and-manage-members.mdx
new file mode 100644
index 00000000..ae817081
--- /dev/null
+++ b/en/guides/workspace/invite-and-manage-members.mdx
@@ -0,0 +1,16 @@
---
title: Inviting and Managing Members
---

Members of a workspace can be invited and managed by the owner and administrators. After logging in, open the settings from the user avatar dropdown in Dify, then open the member management interface from the left side of that screen.

### Inviting Members

Provide the email of the invitee. The system will immediately grant the invitee access to the workspace, and the invitee will also receive an email notification.
+ +The system will automatically create a Dify account for the new member. + +### Removing Members + +Once a member is removed from the team, they will no longer have access to the current workspace. However, this will not affect their access to other workspaces they have already joined. \ No newline at end of file diff --git a/en/learn-more/extended-reading/README.mdx b/en/learn-more/extended-reading/README.mdx new file mode 100644 index 00000000..29a3c307 --- /dev/null +++ b/en/learn-more/extended-reading/README.mdx @@ -0,0 +1,4 @@ +--- +title: Under Maintenance +--- + diff --git a/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.mdx b/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.mdx new file mode 100644 index 00000000..c604d7b8 --- /dev/null +++ b/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.mdx @@ -0,0 +1,225 @@ +--- +title: How to Use JSON Schema Output in Dify? +--- + + +JSON Schema is a specification for describing JSON data structures. Developers can define JSON Schema structures to specify that LLM outputs strictly adhere to the defined data or content, such as generating clear document or code structures. + +## Models Supporting JSON Schema Functionality + +* `gpt-4o-mini-2024-07-18` and later versions +* `gpt-4o-2024-08-06` and later versions + +> For more information on the structured output capabilities of OpenAI series models, please refer to [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction). + +## Usage of Structured Outputs + +1. Connect the LLM to tools, functions, data, and other components within the system. Set `strict: true` in the function definition. When enabled, the Structured Outputs feature ensures that the parameters generated by the LLM for function calls precisely match the JSON schema you provided in the function definition. +2. 
When the LLM responds to users, it outputs content in a structured format according to the definitions in the JSON Schema. + +## Enabling JSON Schema in Dify + +Switch the LLM in your application to one of the models supporting JSON Schema output mentioned above. Then, in the settings form, enable `JSON Schema` and fill in the JSON Schema template. Simultaneously, enable the `response_format` column and switch it to the `json_schema` format. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/899b0a46e438e50bef731fa45d3e7837.png) + +The content generated by the LLM supports output in the following format: + +* **Text:** Output in text format + +## Defining JSON Schema Templates + +You can refer to the following JSON Schema format to define your template content: + +```json +{ + "name": "template_schema", + "description": "A generic template for JSON Schema", + "strict": true, + "schema": { + "type": "object", + "properties": { + "field1": { + "type": "string", + "description": "Description of field1" + }, + "field2": { + "type": "number", + "description": "Description of field2" + }, + "field3": { + "type": "array", + "description": "Description of field3", + "items": { + "type": "string" + } + }, + "field4": { + "type": "object", + "description": "Description of field4", + "properties": { + "subfield1": { + "type": "string", + "description": "Description of subfield1" + } + }, + "required": ["subfield1"], + "additionalProperties": false + } + }, + "required": ["field1", "field2", "field3", "field4"], + "additionalProperties": false + } +} +``` + +Step-by-step guide: + +1. Define basic information: + * Set `name`: Choose a descriptive name for your schema. + * Add `description`: Briefly explain the purpose of the schema. + * Set `strict`: true to ensure strict mode. +2. Create the `schema` object: + * Set `type: "object"` to specify the root level as an object type. + * Add a `properties` object to define all fields. +3. 
Define fields:
   * Create an object for each field, including `type` and `description`.
   * Common types: `string`, `number`, `boolean`, `array`, `object`.
   * For arrays, use `items` to define element types.
   * For objects, recursively define `properties`.
4. Set constraints:
   * Add a `required` array at each level, listing all required fields.
   * Set `additionalProperties: false` at each object level.
5. Handle special fields:
   * Use `enum` to restrict optional values.
   * Use `$ref` to implement recursive structures.

## Examples

### 1. Chain of thought (routine)

**JSON Schema Example**

```json
{
  "name": "math_reasoning",
  "description": "Records steps and final answer for mathematical reasoning",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "steps": {
        "type": "array",
        "description": "Array of reasoning steps",
        "items": {
          "type": "object",
          "properties": {
            "explanation": {
              "type": "string",
              "description": "Explanation of the reasoning step"
            },
            "output": {
              "type": "string",
              "description": "Output of the reasoning step"
            }
          },
          "required": ["explanation", "output"],
          "additionalProperties": false
        }
      },
      "final_answer": {
        "type": "string",
        "description": "The final answer to the mathematical problem"
      }
    },
    "additionalProperties": false,
    "required": ["steps", "final_answer"]
  }
}
```

**Prompts**

```
You are a helpful math tutor. You will be provided with a math problem,
and your goal will be to output a step by step solution, along with a final answer.
For each step, just provide the output as an equation; use the explanation field to detail the reasoning.
```

### 2. UI generation (root recursion mode)

**JSON Schema Example**

```json
{
  "name": "ui",
  "description": "Dynamically generated UI",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "type": {
        "type": "string",
        "description": "The type of the UI component",
        "enum": ["div", "button", "header", "section", "field", "form"]
      },
      "label": {
        "type": "string",
        "description": "The label of the UI component, used for buttons or form fields"
      },
      "children": {
        "type": "array",
        "description": "Nested UI components",
        "items": {
          "$ref": "#"
        }
      },
      "attributes": {
        "type": "array",
        "description": "Arbitrary attributes for the UI component, suitable for any element",
        "items": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string",
              "description": "The name of the attribute, for example onClick or className"
            },
            "value": {
              "type": "string",
              "description": "The value of the attribute"
            }
          },
          "additionalProperties": false,
          "required": ["name", "value"]
        }
      }
    },
    "required": ["type", "label", "children", "attributes"],
    "additionalProperties": false
  }
}
```

**Prompts**

```
You are a UI generator AI. Convert the user input into a UI.
```

**Example Output:**

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/03c1132f714cbd5920f5b0f80287a07d.png)

## Tips

* Ensure that the application prompt includes instructions on how to handle cases where user input cannot produce a valid response.
* The model will always attempt to follow the provided schema. If the input is completely unrelated to the specified schema, the LLM may hallucinate.
* If the LLM detects that the input is incompatible with the task, you can include language in the prompt instructing it to return empty parameters or a specific sentence.
* All fields must be `required`. For details, refer to [Supported Schemas](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas).
* [additionalProperties: false](https://platform.openai.com/docs/guides/structured-outputs/additionalproperties-false-must-always-be-set-in-objects) must always be set in objects.
* The root level of the schema must be an object.

## Appendix

* [Introduction to Structured Outputs](https://cookbook.openai.com/examples/structured\_outputs\_intro)
* [Structured Output](https://platform.openai.com/docs/guides/structured-outputs/json-mode?context=without\_parse)
diff --git a/en/learn-more/extended-reading/retrieval-augment/README.mdx b/en/learn-more/extended-reading/retrieval-augment/README.mdx
new file mode 100644
index 00000000..9fb42b07
--- /dev/null
+++ b/en/learn-more/extended-reading/retrieval-augment/README.mdx
@@ -0,0 +1,24 @@
---
title: Retrieval-Augmented Generation (RAG)
---

### Explanation of the RAG Concept

The RAG architecture, with vector retrieval at its core, has become the mainstream technical framework for enabling large models to access the latest external knowledge while addressing the problem of hallucinations in generated content. The technology has been implemented in a variety of application scenarios.

Developers can use it to build AI-powered customer service, enterprise knowledge bases, AI search engines, and more at low cost, creating intelligent systems that users interact with through natural language. Let's take a representative RAG application as an example:

In the diagram below, when a user asks, "Who is the President of the United States?", the system does not directly pass the question to the large model for an answer.
Instead, it first performs a vector search in a knowledge base (such as Wikipedia shown in the diagram) to find relevant content through semantic similarity matching (e.g., "Joe Biden is the 46th and current president of the United States..."). Then, the system provides the user's question along with the retrieved relevant knowledge to the large model, allowing it to obtain sufficient information to answer the question reliably. + +![Basic RAG Architecture](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/19b393464a4d0374498144502f024516.png) + +**Why is this necessary?** + +We can think of a large model as a super expert who is familiar with various fields of human knowledge. However, it has its limitations. For instance, it does not know personal information about you because such information is private and not publicly available on the internet, so it has no prior learning opportunity. + +When you want to hire this super expert as your personal financial advisor, you need to allow them to review your investment records, household expenses, and other data before answering your questions. This way, the expert can provide professional advice based on your personal circumstances. + +**This is exactly what the RAG system does: it helps the large model temporarily acquire external knowledge it does not possess, allowing it to find answers before responding to questions.** + +From the example above, it is easy to see that the most critical part of the RAG system is the retrieval of external knowledge. Whether the expert can provide professional financial advice depends on whether they can accurately find the necessary information. If they find your weight loss plan instead of your investment records, even the most knowledgeable expert would be powerless. 
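The retrieve-then-answer flow described here can be sketched in a few lines of Python. This is a toy illustration only: word overlap stands in for real embedding similarity, and assembling a prompt string stands in for the actual LLM call.

```python
def similarity(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words that appear in the passage.
    A real RAG system would compare embedding vectors instead."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, knowledge_base: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k passages most relevant to the query."""
    ranked = sorted(knowledge_base, key=lambda p: similarity(query, p), reverse=True)
    return ranked[:top_k]

knowledge_base = [
    "Joe Biden is the 46th and current president of the United States.",
    "The Eiffel Tower is located in Paris, France.",
]

query = "Who is the president of the United States?"
context = retrieve(query, knowledge_base)
# The retrieved context is then provided to the large model along with the question:
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: {query}"
```

The key step is that the question is answered against the retrieved passage, not against the model's parametric memory alone.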
diff --git a/en/learn-more/extended-reading/retrieval-augment/hybrid-search.mdx b/en/learn-more/extended-reading/retrieval-augment/hybrid-search.mdx new file mode 100644 index 00000000..6366cc46 --- /dev/null +++ b/en/learn-more/extended-reading/retrieval-augment/hybrid-search.mdx @@ -0,0 +1,89 @@ +--- +title: Hybrid Search +--- + + +### Why is Hybrid Search Needed? + +The mainstream method in the retrieval phase of RAG (Retrieval-Augmented Generation) is vector search, which matches based on semantic relevance. The technical principle involves splitting the documents in the external knowledge base into semantically complete paragraphs or sentences, converting them into a series of numbers (multi-dimensional vectors) that the computer can understand, and performing the same conversion on the user's query. + +The computer can detect subtle semantic relationships between the user's query and the sentences. For example, "cats chase mice" and "kittens hunt mice" will have a higher semantic relevance than "cats chase mice" and "I like eating ham." After finding the most relevant text content, the RAG system provides it as context for the user's query to the large model, helping it answer the question. + +In addition to enabling complex semantic text retrieval, vector search has other advantages: + +* Understanding similar semantics (e.g., mouse/mousetrap/cheese, Google/Bing/search engine) +* Multilingual understanding (cross-language understanding, such as matching English input with Chinese) +* Multimodal understanding (support for similar matching of text, images, audio, video, etc.) 
+* Fault tolerance (handling spelling errors and vague descriptions) + +While vector search has clear advantages in the above scenarios, it performs poorly in certain situations, such as: + +* Searching for names of people or objects (e.g., Elon Musk, iPhone 15) +* Searching for abbreviations or phrases (e.g., RAG, RLHF) +* Searching for IDs (e.g., `gpt-3.5-turbo`, `titan-xlarge-v1.01`) + +These weaknesses are precisely the strengths of traditional keyword search, which excels in: + +* Exact matching (e.g., product names, personal names, product numbers) +* Matching with a few characters (vector search performs poorly with few characters, but many users tend to input only a few keywords) +* Matching low-frequency words (low-frequency words often carry significant meaning in language, such as "Would you like to have coffee with me?" where "have" and "coffee" carry more importance than "you" and "like") + +For most text search scenarios, the primary goal is to ensure that the most relevant potential results appear in the candidate results. Vector search and keyword search each have their advantages in the retrieval field. Hybrid search combines the strengths of both search technologies and compensates for their weaknesses. + +In hybrid search, you need to establish vector indexes and keyword indexes in the database in advance. When a user query is input, the most relevant texts are retrieved from the documents using both retrieval methods. + +![Hybrid Search](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/16818764adc7c9e7bfbe4be8fb3fd6ee.png) + +"Hybrid search" does not have a precise definition. This article uses the combination of vector search and keyword search as an example. If we use other combinations of search algorithms, it can also be called "hybrid search." For instance, we can combine knowledge graph techniques for retrieving entity relationships with vector search techniques. 
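A common way to merge the ranked lists produced by two retrieval systems is Reciprocal Rank Fusion (RRF). The sketch below illustrates the general idea; it is not a description of Dify's internal implementation.

```python
def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists into one.

    Each document earns 1 / (k + rank) from every list it appears in,
    so documents ranked highly by either retriever float to the top.
    k = 60 is the constant commonly used for RRF.
    """
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_results = ["doc_b", "doc_a", "doc_c"]   # ranked by semantic similarity
keyword_results = ["doc_a", "doc_d", "doc_b"]  # ranked by keyword match
fused = reciprocal_rank_fusion([vector_results, keyword_results])
```

Because RRF works on ranks rather than raw scores, it sidesteps the problem that vector-search scores and keyword-search scores live on different scales.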
+ +Different retrieval systems excel at finding various subtle relationships between texts (paragraphs, sentences, words), including exact relationships, semantic relationships, thematic relationships, structural relationships, entity relationships, temporal relationships, event relationships, etc. No single retrieval mode can be suitable for all scenarios. **Hybrid search achieves complementarity between multiple retrieval technologies through the combination of multiple retrieval systems.** + +### Vector Search + +Definition: Generating query embeddings and querying the text segments most similar to their vector representations. + +![Vector Search Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/d9f6c540579ffa7833c8e8fecab13470.png) + +**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3. + +**Score Threshold:** Used to set the similarity threshold for filtering text fragments, i.e., only recalling text fragments that exceed the set score. The system's default is to turn off this setting, meaning it does not filter the similarity values of recalled text fragments. When enabled, the default value is 0.5. + +**Rerank Model:** After configuring the Rerank model's API key on the "Model Providers" page, you can enable the "Rerank Model" in the retrieval settings. The system will perform semantic re-ranking on the recalled document results after semantic retrieval to optimize the ranking results. When the Rerank model is set, the TopK and Score Threshold settings only take effect in the Rerank step. + +### Full-Text Search + +Definition: Indexing all words in the document, allowing users to query any word and return text fragments containing those words. 
+ +![Full-Text Search Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/4ba9e7aed96e64da0e6475913041ed55.png) + +**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3. + +**Rerank Model:** After configuring the Rerank model's API key on the "Model Providers" page, you can enable the "Rerank Model" in the retrieval settings. The system will perform semantic re-ranking on the recalled document results after full-text retrieval to optimize the ranking results. When the Rerank model is set, the TopK and Score Threshold settings only take effect in the Rerank step. + +### Hybrid Search + +Simultaneously performs full-text search and vector search, applying a re-ranking step to select the best results matching the user's query from both types of query results. Requires configuring the Rerank model API. + +![Hybrid Search Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/60ab88815f84ef92a45ac239481fcefd.png) + +**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3. + +**Rerank Model:** After configuring the Rerank model's API key on the "Model Providers" page, you can enable the "Rerank Model" in the retrieval settings. The system will perform semantic re-ranking on the recalled document results after hybrid retrieval to optimize the ranking results. When the Rerank model is set, the TopK and Score Threshold settings only take effect in the Rerank step. + +### Setting Retrieval Mode When Creating a Dataset + +Set different retrieval modes by entering the "Dataset -> Create Dataset" page and configuring the retrieval settings. 
+ +![Setting Retrieval Mode When Creating a Dataset](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/36f3a2e700529cdfdab067acc1340d4e.png) + +### Modifying Retrieval Mode in Dataset Settings + +Modify the retrieval mode of an existing dataset by entering the "Dataset -> Select Dataset -> Settings" page. + +![Modifying Retrieval Mode in Dataset Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/24fc8d33f05253cc83b5f6371b94b84b.png) + +### Modifying Retrieval Mode in Prompt Arrangement + +Modify the retrieval mode when creating an application by entering the "Prompt Arrangement -> Context -> Select Dataset -> Settings" page. + +![Modifying Retrieval Mode in Prompt Arrangement](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/b07e4a2c1345ab676c6e0c107651d0f9.png) diff --git a/en/learn-more/extended-reading/retrieval-augment/rerank.mdx b/en/learn-more/extended-reading/retrieval-augment/rerank.mdx new file mode 100644 index 00000000..0e7bdc00 --- /dev/null +++ b/en/learn-more/extended-reading/retrieval-augment/rerank.mdx @@ -0,0 +1,51 @@ +--- +title: Re-ranking +--- + +### Why is Re-ranking Needed? + +Hybrid search can leverage the strengths of different retrieval technologies to achieve better recall results. However, the query results from different retrieval modes need to be merged and normalized (converting data to a uniform standard range or distribution for better comparison, analysis, and processing) before being provided to the large model together. This is where a scoring system comes in: the Re-rank Model. 
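The normalization mentioned above can be as simple as min-max scaling each system's scores into [0, 1] before merging. The sketch below is illustrative only; the function names and the equal 0.5 weighting are assumptions, not Dify's internal logic:

```python
def min_max_normalize(scored):
    """Rescale {doc: score} into [0, 1] so that scores from
    different retrieval systems become comparable."""
    if not scored:
        return {}
    lo, hi = min(scored.values()), max(scored.values())
    if hi == lo:
        return {doc: 1.0 for doc in scored}
    return {doc: (s - lo) / (hi - lo) for doc, s in scored.items()}

def merge(vector_hits, keyword_hits, vector_weight=0.5):
    """Weighted sum of the normalized scores from both systems."""
    v = min_max_normalize(vector_hits)
    k = min_max_normalize(keyword_hits)
    docs = set(v) | set(k)
    merged = {d: vector_weight * v.get(d, 0.0)
                 + (1 - vector_weight) * k.get(d, 0.0) for d in docs}
    return sorted(merged.items(), key=lambda pair: pair[1], reverse=True)

# Raw scores live on very different scales: cosine similarity vs. BM25.
vector_hits = {"doc_a": 0.82, "doc_b": 0.71, "doc_c": 0.55}
keyword_hits = {"doc_b": 12.4, "doc_d": 9.1}
print(merge(vector_hits, keyword_hits))
```

Here `doc_b` wins because it scores well in both systems — exactly the complementarity that hybrid search is after.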
+ +**The re-rank model calculates the semantic match between the list of candidate documents and the user query, reordering them based on semantic match to improve the results of semantic sorting.** The principle is to compute a relevance score between the user query and each candidate document and return a list of documents sorted by relevance from high to low. Common re-rank models include Cohere rerank, bge-reranker, etc. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/292a7f8e50b9b6be6ababd6b84ac322f.png) + +In most cases, there is a preliminary retrieval before re-ranking because calculating the relevance score between a query and millions of documents would be highly inefficient. Therefore, **re-ranking is typically placed at the final stage of the search process and is ideal for merging and sorting results from different retrieval systems.** + +However, re-ranking is not only applicable for merging results from different retrieval systems. Even in a single retrieval mode, introducing a re-ranking step can effectively improve document recall. For example, semantic re-ranking can be added after keyword retrieval. + +In practical applications, besides normalizing multiple query results, we generally limit the number of segments passed to the large model (i.e., TopK, which can be set in the re-rank model parameters) before handing over the relevant text segments to the large model. This is because the input window of the large model has size limitations (typically 4K, 8K, 16K, 128K tokens). You need to choose an appropriate segmentation strategy and TopK value based on the input window size of the selected model. + +It is important to note that even if the model's context window is large enough, recalling too many segments may introduce less relevant content, reducing the quality of the response. Therefore, the TopK parameter for re-ranking is not necessarily the larger, the better. 
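Put together, the re-rank stage reduces to: score every candidate against the query, sort by score, apply the Score Threshold, and keep TopK. A schematic sketch, with the model call replaced by a placeholder scoring function (a real deployment would call a re-rank model such as Cohere rerank or bge-reranker at that point):

```python
def rerank(query, candidates, score_fn, top_k=3, score_threshold=None):
    """Re-order candidate segments by relevance to the query.

    score_fn stands in for the re-rank model: it takes (query, text)
    and returns a relevance score, higher meaning more relevant.
    """
    scored = [(text, score_fn(query, text)) for text in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    if score_threshold is not None:
        scored = [pair for pair in scored if pair[1] >= score_threshold]
    return scored[:top_k]

# Toy scoring function: fraction of query words found in the text
# (a real re-rank model computes a semantic cross-encoder score).
def overlap_score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

candidates = [
    "Hybrid search merges keyword and vector results",
    "The weather is sunny today",
    "Re-ranking sorts documents by semantic relevance",
]
print(rerank("how does hybrid search merge results", candidates,
             overlap_score, top_k=2, score_threshold=0.2))
```

Note how the threshold can shrink the result below TopK — which is why, as described above, TopK is an upper bound rather than a guaranteed count.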
+ +Re-ranking is not a replacement for search technology but an auxiliary tool to enhance existing retrieval systems. **Its greatest advantage is that it provides a simple, low-complexity way to improve search results, allowing users to incorporate semantic relevance into existing search systems without significant infrastructure modifications.** + +For example, with Cohere Rerank, you only need to register an account and apply for an API key; integration requires just two lines of code. Additionally, they offer multilingual models, meaning you can sort query results in different languages simultaneously. + +### How to Configure the Re-rank Model? + +Dify currently supports the Cohere Rerank model. You can enter the "Model Providers" page and fill in the API key for the Re-rank model: + +![Configure Cohere Rerank Model in Model Providers](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/cf0cd2766490583758d06cafbfb01a40.png) + +### How to Obtain the Cohere Rerank Model? + +Visit [https://cohere.com/rerank](https://cohere.com/rerank), register, and apply for Rerank model access to obtain the API key. + +### Setting the Re-rank Model in Dataset Retrieval Mode + +Enter the "Dataset -> Create Dataset -> Retrieval Settings" page to add the Re-rank settings. Besides setting the Re-rank model when creating a dataset, you can also change the Re-rank configuration in the settings of an existing dataset and in the dataset recall mode settings in application orchestration. + +![Setting the Re-rank Model in Dataset Retrieval Mode](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/ad3bc13e53749ce69de5db196443676f.png) + +**TopK:** Used to set the number of relevant documents returned after re-ranking. + +**Score Threshold:** Used to set the minimum score for relevant documents returned after re-ranking.
When the Re-rank model is set, the TopK and Score Threshold settings only take effect in the re-rank step. + +### Setting the Re-rank Model in Multi-Path Recall Mode for Datasets + +Enter the "Prompt Arrangement -> Context -> Settings" page to enable the Re-rank model when multi-path recall mode is selected. + +For an explanation of multi-path recall mode, see 🔗[Multi-path Retrieval](https://docs.dify.ai/guides/knowledge-base/integrate-knowledge-within-application#multi-path-retrieval-recommended) + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/f9cde54ca59bfdc5ad94afb00551d5d0.png) diff --git a/en/learn-more/extended-reading/retrieval-augment/retrieval.mdx b/en/learn-more/extended-reading/retrieval-augment/retrieval.mdx new file mode 100644 index 00000000..253005c9 --- /dev/null +++ b/en/learn-more/extended-reading/retrieval-augment/retrieval.mdx @@ -0,0 +1,20 @@ +--- +title: Retrieval Modes +--- + + +When users build AI applications with multiple knowledge bases, Dify's retrieval strategy determines which content is retrieved. + +![Retrieval Mode Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/a44e23f00203fd85d16b090403f8fb58.png) + +### Retrieval Setting + +Matches all datasets based on user intent, querying related text fragments from multiple datasets simultaneously. After a re-ranking step, the best results matching the user query are selected from the multi-path query results; this requires a configured Rerank model API. In multi-path retrieval mode, the retriever searches for text content related to the user query across all datasets associated with the application, merges the relevant document results from the individual paths, and re-ranks the retrieved documents semantically using the Rerank model. + +In multi-path retrieval mode, configuring a Rerank model is recommended.
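Assuming each dataset exposes a search function that returns scored segments, the multi-path flow described above can be sketched as: query every dataset, pool the hits, then re-rank the pool once. All names here are illustrative, not Dify's actual APIs:

```python
def multi_path_retrieve(query, datasets, rerank_fn, top_k=3):
    """Query every dataset, pool the hits, then re-rank the pool once.

    datasets: search callables, each returning a list of (segment, score).
    rerank_fn: stand-in for the Rerank model; maps (query, segment) to a
    relevance score used for the final, unified ordering.
    """
    pooled = []
    for search in datasets:
        pooled.extend(segment for segment, _score in search(query))
    # A single re-rank pass over the merged candidates from all paths.
    pooled.sort(key=lambda segment: rerank_fn(query, segment), reverse=True)
    return pooled[:top_k]

# Two toy "datasets" returning fixed hits; a real deployment would run
# vector / full-text retrieval against each knowledge base instead.
def faq_dataset(query):
    return [("Reset your password via flask reset-password", 0.4)]

def guide_dataset(query):
    return [("Passwords can be reset by an admin", 0.9),
            ("Dify supports multiple datasets", 0.2)]

def word_overlap(query, segment):
    # Placeholder relevance score: number of shared words.
    return len(set(query.lower().split()) & set(segment.lower().split()))

top = multi_path_retrieve("reset password", [faq_dataset, guide_dataset],
                          word_overlap, top_k=2)
```

Because the final ordering comes from the single re-rank pass rather than each dataset's own scores, the quality of the result does not depend on dataset descriptions or the LLM's reasoning — the property the paragraph above relies on.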
+ +Below is the technical flowchart for the multi-path retrieval mode: + +![Multi-Path Retrieval](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/extended-reading/retrieval-augment/64007d543f1c5c3f2e87d606d79d04d3.png) + +Since multi-path retrieval mode does not rely on the model's inference capability or dataset descriptions, it can achieve higher-quality retrieval results when retrieving across multiple datasets. Additionally, incorporating a re-ranking step can further improve document recall. Therefore, when creating knowledge base Q\&A applications associated with multiple datasets, we recommend configuring the retrieval mode as multi-path retrieval. diff --git a/en/learn-more/extended-reading/what-is-llmops.mdx b/en/learn-more/extended-reading/what-is-llmops.mdx new file mode 100644 index 00000000..3f3cc890 --- /dev/null +++ b/en/learn-more/extended-reading/what-is-llmops.mdx @@ -0,0 +1,80 @@ +--- +title: What is LLMOps? +--- + +LLMOps (Large Language Model Operations) is a comprehensive set of practices and processes that cover the development, deployment, maintenance, and optimization of large language models (such as the GPT series). The goal of LLMOps is to ensure the efficient, scalable, and secure use of these powerful AI models to build and run real-world applications. It involves aspects such as model training, deployment, monitoring, updating, security, and compliance. + +The table below illustrates the differences in various stages of AI application development before and after using Dify:

| Steps | Before | After | Time Saved |
| --- | --- | --- | --- |
| Developing Frontend & Backend for Applications | Integrating and encapsulating LLM capabilities requires a lot of time to develop front-end applications. | Directly use Dify's backend services to develop based on a WebApp scaffold. | -80% |
| Prompt Engineering | Can only be done by calling APIs or the Playground. | Debug based on the user's input data. | -25% |
| Data Preparation and Embedding | Writing code to implement long text data processing and embedding. | Upload text or bind data sources to the platform. | -80% |
| Application Logging and Analysis | Writing code to record logs and accessing databases to view them. | The platform provides real-time logging and analysis. | -70% |
| Data Analysis and Fine-Tuning | Technical personnel manage data and create fine-tuning queues. | Non-technical personnel can collaborate and adjust the model visually. | -60% |
| AI Plugin Development and Integration | Writing code to create and integrate AI plugins. | The platform provides visual tools for creating and integrating plugins. | -50% |
+ +Before using an LLMOps platform like Dify, the process of developing applications based on LLMs can be cumbersome and time-consuming. Developers need to handle tasks at each stage on their own, which can lead to inefficiencies, difficulties in scaling, and security issues. Here is the development process before using an LLMOps platform: + +1. Data Preparation: Manually collect and preprocess data, which may involve complex data cleaning and annotation work, requiring a significant amount of code. +2. Prompt Engineering: Developers can only write and debug Prompts through API calls or Playgrounds, lacking real-time feedback and visual debugging. +3. Embedding and Context Management: Manually handling the embedding and storage of long contexts, which can be difficult to optimize and scale, requiring a fair amount of programming work and familiarity with model embedding and vector databases. +4. Application Monitoring and Maintenance: Manually collecting and analyzing performance data makes it hard to detect and address issues in real time, and log records may be missing entirely. +5. Model Fine-tuning: Independently manage the fine-tuning data preparation and training process, which can lead to inefficiencies and require more code. +6. System and Operations: Requires technical personnel to build and maintain a management backend, increasing development and maintenance costs, with no support for collaboration or non-technical users. + +With the introduction of an LLMOps platform like Dify, the process of developing applications based on LLMs becomes more efficient, scalable, and secure. Here are the advantages of developing LLM applications using Dify: + +1. Data Preparation: The platform provides data collection and preprocessing tools, simplifying data cleaning and annotation tasks, and minimizing or even eliminating coding work. +2.
Prompt Engineering: WYSIWYG Prompt editing and debugging, allowing real-time optimization and adjustments based on user input data. +3. Embedding and Context Management: Automatically handling the embedding, storage, and management of long contexts, improving efficiency and scalability without the need for extensive coding. +4. Application Monitoring and Maintenance: Real-time monitoring of performance data, quickly identifying and addressing issues, ensuring the stable operation of applications, and providing complete log records. +5. Model Fine-tuning: The platform offers one-click fine-tuning functionality based on previously annotated real-use data, improving model performance and reducing coding work. +6. System and Operations: User-friendly interface accessible to non-technical users, supporting collaboration among multiple team members, and reducing development and maintenance costs. Compared to traditional development methods, Dify offers more transparent and easy-to-monitor application management, allowing team members to better understand the application's operation. + + + Additionally, Dify will provide AI plugin development and integration features, enabling developers to easily create and deploy LLM-based plugins for various applications, further enhancing development efficiency and application value. + +**Dify** is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugins and knowledge integration, and streamline your operations using a single interface for prompt engineering, visual analytics, and continuous improvement. 
+ diff --git a/en/learn-more/faq/README.mdx b/en/learn-more/faq/README.mdx new file mode 100644 index 00000000..5d0f2428 --- /dev/null +++ b/en/learn-more/faq/README.mdx @@ -0,0 +1,8 @@ +--- +title: Frequently Asked Questions (FAQs) +--- + + +[Self hosted / local deployment frequently asked questions (FAQs)](https://docs.dify.ai/learn-more/faq/install-faq) + +[LLM configuration and usage frequently asked questions (FAQs)](https://docs.dify.ai/learn-more/faq/use-llms-faq) \ No newline at end of file diff --git a/en/learn-more/faq/install-faq.mdx b/en/learn-more/faq/install-faq.mdx new file mode 100644 index 00000000..e7ea3a04 --- /dev/null +++ b/en/learn-more/faq/install-faq.mdx @@ -0,0 +1,278 @@ +--- +title: Self Host / Local Deployment +--- + + +### 1. How to reset the password if it is incorrect after local deployment initialization? + +If you deployed using Docker Compose, you can reset the password with the following command: + +``` +docker exec -it docker-api-1 flask reset-password +``` + +Enter the account email and the new password twice. + +### 2. How to fix the "File not found" error in local deployment logs? + +``` +ERROR:root:Unknown Error in completion +Traceback (most recent call last): + File "/www/wwwroot/dify/dify/api/libs/rsa.py", line 45, in decrypt + private_key = storage.load(filepath) + File "/www/wwwroot/dify/dify/api/extensions/ext_storage.py", line 65, in load + raise FileNotFoundError("File not found") +FileNotFoundError: File not found +``` + +This error might be due to changing the deployment method or deleting the `api/storage/privkeys` directory. This file is used to encrypt the large model keys, so its loss is irreversible. You can reset the encryption key pair with the following commands: + +* Docker Compose deployment + + ``` + docker exec -it docker-api-1 flask reset-encrypt-key-pair + ``` +* Source code startup + + Navigate to the `api` directory + + ``` + flask reset-encrypt-key-pair + ``` + + Follow the prompts to reset. 
+ +### 3. Unable to log in after installation, or receiving a 401 error on subsequent interfaces after a successful login? + +This might be due to switching the domain/URL, causing cross-domain issues between the frontend and backend. Cross-domain and identity issues involve the following configurations: + +1. CORS Cross-Domain Configuration + 1. `CONSOLE_CORS_ALLOW_ORIGINS` + + Console CORS policy, default is `*`, meaning all domains can access. + 2. `WEB_API_CORS_ALLOW_ORIGINS` + + WebAPP CORS policy, default is `*`, meaning all domains can access. + +### 4. The page keeps loading after startup, and requests show CORS errors? + +This might be due to switching the domain/URL, causing cross-domain issues between the frontend and backend. Update the following configuration items in `docker-compose.yml` to the new domain: + +`CONSOLE_API_URL:` Backend URL for the console API. +`CONSOLE_WEB_URL:` Frontend URL for the console web. +`SERVICE_API_URL:` URL for the service API. +`APP_API_URL:` Backend URL for the WebApp API. +`APP_WEB_URL:` URL for the WebApp. + +For more information, please refer to: [Environment Variables](../../getting-started/install-self-hosted/environments.md) + +### 5. How to upgrade the version after deployment? + +If you started with an image, pull the latest image to complete the upgrade. If you started with source code, pull the latest code and then start it to complete the upgrade. + +For source code deployment updates, navigate to the `api` directory and run the following command to migrate the database structure to the latest version: + +`flask db upgrade` + +### 6. How to configure environment variables when importing using Notion? + +[**Notion Integration Configuration Address**](https://www.notion.so/my-integrations). When performing a private deployment, set the following configurations: + +1. **`NOTION_INTEGRATION_TYPE`**: This value should be configured as **public/internal**. 
Since Notion's OAuth redirect address only supports https, use Notion's internal integration for local deployment. +2. **`NOTION_CLIENT_SECRET`**: Notion OAuth client secret (for public integration type). +3. **`NOTION_CLIENT_ID`**: OAuth client ID (for public integration type). +4. **`NOTION_INTERNAL_SECRET`**: Notion internal integration secret. If the value of `NOTION_INTEGRATION_TYPE` is **internal**, configure this variable. + +### 7. How to change the name of the space in the local deployment version? + +Modify it in the `tenants` table of the database. + +### 8. Where to modify the domain for accessing the application? + +Find the `APP_WEB_URL` configuration in `docker-compose.yaml`. + +### 9. What to back up if a database migration occurs? + +Back up the database, configured storage, and vector database data. If deployed using Docker Compose, directly back up all data in the `dify/docker/volumes` directory. + +### 10. Why can't a Docker-deployed Dify access a locally started OpenLLM service via 127.0.0.1? + +Inside the container, 127.0.0.1 refers to the container itself, not the host. The server address configured in Dify needs to be the host machine's local network IP address. + +### 11. How to resolve the size and quantity limits on document uploads to the dataset in the local deployment version? + +Refer to the official website [Environment Variables Documentation](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments) for configuration. + +### 12. How to invite members via email in the local deployment version? + +After entering the email address and sending the invitation, the page will display an invitation link. Copy the invitation link and forward it to the user, who can open the link, log in via email, set a password, and access your space. + +### 13. What to do if you encounter the error "Can't load tokenizer for 'gpt2'" in the local deployment version?
+ +``` +Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer. +``` + +Refer to the official website [Environment Variables Documentation](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments) for configuration, and the related [Issue](https://github.com/langgenius/dify/issues/1261). + +### 14. How to resolve a port 80 conflict in the local deployment version? + +If port 80 is occupied, stop the service occupying port 80 or modify the port mapping in `docker-compose.yaml` to map port 80 to another port. Typically, Apache and Nginx occupy this port, which can be resolved by stopping these two services. + +### 15. What to do if you encounter the error "[openai] Error: ffmpeg is not installed" during text-to-speech? + +``` +[openai] Error: ffmpeg is not installed +``` + +Since OpenAI TTS implements audio stream segmentation, ffmpeg needs to be installed for source code deployment to work properly. Detailed steps: + +**Windows:** + +1. Visit [FFmpeg Official Website](https://ffmpeg.org/download.html) and download the precompiled Windows shared library. +2. Download and extract the FFmpeg folder, which will generate a folder like "ffmpeg-20200715-51db0a4-win64-static". +3. Move the extracted folder to your desired location, e.g., C:\Program Files\. +4. Add the absolute path of the FFmpeg bin directory to the system environment variables. +5. Open Command Prompt and enter "ffmpeg -version". If you see the FFmpeg version information, the installation is successful. + +**Ubuntu:** + +1. Open Terminal. +2. Enter the following commands to install FFmpeg: `sudo apt-get update`, then `sudo apt-get install ffmpeg`. +3. Enter "ffmpeg -version" to check if the installation is successful. + +**CentOS:** + +1. 
First, enable the EPEL repository. Enter in Terminal: `sudo yum install epel-release` +2. Then, enter: `sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm` +3. Update yum packages, enter: `sudo yum update` +4. Finally, install FFmpeg, enter: `sudo yum install ffmpeg ffmpeg-devel` +5. Enter "ffmpeg -version" to check if the installation is successful. + +**Mac OS X:** + +1. Open Terminal. +2. If you haven't installed Homebrew, you can install it by entering the following command in Terminal: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"` +3. Use Homebrew to install FFmpeg, enter: `brew install ffmpeg` +4. Enter "ffmpeg -version" to check if the installation is successful. + +### 16. How to resolve an Nginx configuration file mount failure during local deployment? + +``` +Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/run/desktop/mnt/host/d/Documents/docker/nginx/nginx.conf" to rootfs at "/etc/nginx/nginx.conf": mount /run/desktop/mnt/host/d/Documents/docker/nginx/nginx.conf:/etc/nginx/nginx.conf (via /proc/self/fd/9), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type +``` + +Download the complete project, navigate to the docker directory, and execute `docker-compose up -d`. + +``` +git clone https://github.com/langgenius/dify.git +cd dify/docker +docker compose up -d +``` + +### 17. Migrate weaviate to another vector database + +To migrate from Weaviate to another vector database, follow these steps: + +1. For local source code deployment: + - Update the vector database setting in the `.env` file + - Example: Set `VECTOR_STORE=qdrant` to migrate to Qdrant + +2. 
For Docker Compose deployment: + - Update the vector database settings in `docker-compose.yaml` + - Make sure to modify both the API and worker service configurations + +``` +# The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`, `analyticdb`. +VECTOR_STORE: weaviate +``` + +3. Execute the command below in your terminal or Docker container + +``` +flask vdb-migrate # or docker exec -it docker-api-1 flask vdb-migrate +``` + +**Tested target databases:** + +- qdrant +- milvus +- analyticdb + +### 18. Why is SSRF_PROXY needed? + +In the community edition's `docker-compose.yaml`, you might notice some services configured with `SSRF_PROXY` and `HTTP_PROXY` environment variables, all pointing to an `ssrf_proxy` container. This is to prevent SSRF attacks. For more information on SSRF attacks, you can read [this article](https://portswigger.net/web-security/ssrf). + +To avoid unnecessary risks, we configure a proxy for all services that could be used for SSRF attacks and force services like Sandbox to access external networks only through the proxy, ensuring your data and service security. By default, this proxy does not intercept any local requests, but you can customize the proxy behavior by modifying the `squid` configuration file. + +#### How to customize the proxy behavior? + +In `docker/volumes/ssrf_proxy/squid.conf`, you can find the `squid` configuration file. You can customize the proxy behavior here, such as adding ACL and `http_access` rules to restrict which destinations the proxy may reach. For example, suppose your local network can access the `192.168.101.0/24` segment, but `192.168.101.19` holds sensitive data that you don't want locally deployed Dify users to reach, while the other IPs remain accessible. You can add the following rules in `squid.conf`: + +``` +acl restricted_ip dst 192.168.101.19 +acl localnet src 192.168.101.0/24 + +http_access deny restricted_ip +http_access allow localnet +http_access deny all +``` + +This is just a simple example.
You can customize the proxy behavior according to your needs. If your setup is more complex, such as needing to configure an upstream proxy or a cache, you can refer to the [squid configuration documentation](http://www.squid-cache.org/Doc/config/) for more information. + +### 19. How to set your created application as a template? + +Setting your own application as a template is not currently supported. The existing templates are provided officially by Dify for cloud version users to refer to. If you are using the cloud version, you can add applications to your workspace or customize them after modification to create your own applications. If you are using the community version and need to create more application templates for your team, you can contact our business team for paid technical support: [business@dify.ai](mailto:business@dify.ai) + +### 20. 502 Bad Gateway + +This happens because Nginx is forwarding requests to the wrong location. First, ensure the containers are running, then run the following command with root privileges: + +``` +docker ps -q | xargs -n 1 docker inspect --format '{{ .Name }}: {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' +``` + +Find these two lines in the output: + +``` +/docker-web-1: 172.19.0.5 +/docker-api-1: 172.19.0.7 +``` + +Note the IP addresses. Then open the directory where you store the Dify source code, open `dify/docker/nginx/conf.d`, replace `http://api:5001` with `http://172.19.0.7:5001`, and replace `http://web:3000` with `http://172.19.0.5:3000`, then restart the Nginx container or reload the configuration. + +These IP addresses are _**examples**_; you must execute the command to obtain your own IP addresses rather than copying these values. You might need to reconfigure the IP addresses when the relevant containers are restarted. + +### 21. How to modify the API service port number? + +The API service port is consistent with the one used by the Dify platform.
You can reassign the running port by modifying the `nginx` configuration in the `docker-compose.yaml` file. + +### 22. How to Migrate from Local to Cloud Storage? + +To migrate files from local storage to cloud storage (e.g., Alibaba Cloud OSS), you'll need to transfer data from the 'upload_files' and 'privkeys' folders. Follow these steps: + +1. Configure Storage Settings + + For local source code deployment: + - Update storage settings in `.env` file + - Set `STORAGE_TYPE=aliyun-oss` + - Configure Alibaba Cloud OSS credentials + + For Docker Compose deployment: + - Update storage settings in `docker-compose.yaml` + - Set `STORAGE_TYPE: aliyun-oss` + - Configure Alibaba Cloud OSS credentials + +2. Execute Migration Commands + + For local source code: + ```bash + flask upload-private-key-file-to-cloud-storage + flask upload-local-files-to-cloud-storage + ``` + + For Docker Compose: + ```bash + docker exec -it docker-api-1 flask upload-private-key-file-to-cloud-storage + docker exec -it docker-api-1 flask upload-local-files-to-cloud-storage + ``` diff --git a/en/learn-more/faq/plugins.mdx b/en/learn-more/faq/plugins.mdx new file mode 100644 index 00000000..a7935e28 --- /dev/null +++ b/en/learn-more/faq/plugins.mdx @@ -0,0 +1,16 @@ +--- +title: Plugins +--- + + +> The following issue and solution apply to version `1.0.0` **Community Edition**. + +#### How to Handle Errors When Installing Plugins? + +**Issue**: If you encounter the error message: `plugin verification has been enabled, and the plugin you want to install has a bad signature`, how to handle the issue? + +**Solution**: Add the following line to the end of your `.env` configuration file: `FORCE_VERIFYING_SIGNATURE=false` + +Once this field is added, the Dify platform will allow the installation of all plugins that are not listed (and thus not verified) in the Dify Marketplace. + +**Note**: For security reasons, always install plugins from unknown sources in a test or sandbox environment first. 
Confirm their safety before deploying to the production environment. diff --git a/en/learn-more/faq/use-llms-faq.mdx b/en/learn-more/faq/use-llms-faq.mdx new file mode 100644 index 00000000..36a48b3b --- /dev/null +++ b/en/learn-more/faq/use-llms-faq.mdx @@ -0,0 +1,149 @@ +--- +title: LLM Configuration and Usage +--- + + +### 1. How to access OpenAI via a proxy server in China? + +Dify supports custom API domain names for OpenAI and any large model API server compatible with OpenAI. In the community edition, you can fill in the target server address through **Settings --> Model Providers --> OpenAI --> Edit API**. + +### 2. How to choose a base model? + +* **gpt-3.5-turbo**: gpt-3.5-turbo is an upgraded version of the gpt-3 model series. It is more powerful than gpt-3 and can handle more complex tasks, with significant improvements in understanding long texts and cross-document reasoning. gpt-3.5-turbo can generate more coherent and persuasive text, and it has also greatly improved in summarization, translation, and creative writing. Specializes in: **Long text understanding, cross-document reasoning, summarization, translation, creative writing.** +* **gpt-4**: gpt-4 is the latest and most powerful Transformer language model. Although its parameter count has not been disclosed, it is top-notch in all language tasks, especially those requiring deep understanding and generation of long, complex responses. gpt-4 can handle all aspects of human language, including understanding abstract concepts and cross-page reasoning. gpt-4 is the first truly universal language understanding system capable of handling any natural language processing task within the AI domain. Specializes in: **All NLP tasks, language understanding, long text generation, cross-document reasoning, abstract concept understanding.** For more details, refer to the [documentation](https://platform.openai.com/docs/models/overview). + +### 3. Why is it recommended to set max_tokens smaller?
+ +In natural language processing, longer text outputs usually require more computation time and resources. Therefore, limiting the length of the output text can reduce computational cost and time to some extent. For example, setting max_tokens=500 means only considering the first 500 tokens of the output text, and any part beyond this length will be discarded. This ensures that the output text length does not exceed the LLM's acceptable range and optimizes computational resources, improving model efficiency. Additionally, setting a smaller max_tokens allows for a longer prompt. For instance, gpt-3.5-turbo has a limit of 4097 tokens; if max_tokens=4000, only 97 tokens are left for the prompt, and exceeding this will cause an error. + +### 4. How to reasonably split long texts in datasets? + +In some natural language processing applications, texts are typically split by paragraphs or sentences to better handle and understand the semantic and structural information in the text. The smallest splitting unit depends on the specific task and technical implementation. For example: + +* For text classification tasks, texts are usually split by sentences or paragraphs. +* For machine translation tasks, entire sentences or paragraphs are used as splitting units. + +Finally, experiments and evaluations are needed to determine the most suitable embedding technique and splitting unit. You can compare the performance of different techniques and splitting units on the test set and choose the optimal solution. + +### 5. What distance function do we use for dataset segmentation? + +We use [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity). The choice of distance function is generally not critical. 
OpenAI embeddings are normalized to a length of 1, which means:
+
+* Using the dot product can slightly speed up the calculation of cosine similarity.
+* Cosine similarity and Euclidean distance will produce the same ranking.
+
+If normalized embedding vectors are used to calculate cosine similarity or Euclidean distance and vectors are ranked based on these similarity measures, the ranking results will be the same. This is because, after normalization, the length of the vectors no longer affects their relative relationships; only directional information is retained. Therefore, when using normalized vectors for similarity measurement, different measurement methods will yield the same ranking results. After normalization, all vectors are scaled to a length of 1, meaning they all lie on the unit sphere. Unit vectors describe only direction without magnitude, as their length is always 1. _For specific principles, you can ask ChatGPT._
+
+When embedding vectors are normalized to a length of 1, calculating the cosine similarity between two vectors can be simplified to their dot product. Since the normalized vector lengths are all 1, the dot product result is equivalent to the cosine similarity result. Given that dot product operations are faster than other similarity measures (like Euclidean distance), using normalized vectors for dot product calculations can slightly improve computational efficiency.
+
+### 6. How to get free trial quotas for Zhipu·AI, iFlytek Spark, and MiniMax models?
+
+We collaborate with major model providers to offer a certain amount of free token trial quotas to Chinese users. Through Dify **Settings --> Model Providers --> Show more model providers**, click "Get Free" on the Zhipu·AI, iFlytek Spark, or MiniMax icons.
If you can't see the entrance in the English interface, switch the product language to Chinese: + +* **Zhipu·AI: Get 10 million tokens for free.** Click "Get Free", enter your phone number and verification code to receive the quota, regardless of whether you have registered with Zhipu·AI before. +* **iFlytek Spark (V1.5 model, V2.0 model): Get 6 million tokens for free, 3 million tokens for each model, quotas are not interchangeable**. Enter through Dify, complete the registration on iFlytek Spark's open platform (only for phone numbers not previously registered with iFlytek Spark), return to Dify, wait for 5 minutes, and refresh the page to see the available quota. +* **MiniMax: Get 1 million tokens for free.** Click "Get Free" to receive the quota without manual registration, regardless of whether you have registered with MiniMax before. + +Once the trial quota is credited, select the model you need to use in **Prompt Arrangement --> Model and Parameters --> Language Model**. + +### 7. When filling in the OpenAI key, the validation failed with the error: "Validation failed: You exceeded your current quota, please check your plan and billing details." What is the reason? + +This indicates that your OpenAI key's account has run out of funds. Please go to OpenAI to recharge. + +### 8. When using OpenAI's key for conversation in the application, I encountered the following errors. What is the reason? + +Error one: + +```JSON +The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application +``` + +Error two: + +```JSON +Rate limit reached for default-gpt-3.5-turbo in organization org-wDrZCxxxxxxxxxissoZb on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. 
Visit https://platform.openai.com/account/billing to add a payment method.
+```
+
+Please check if you have reached the official API call rate limit. Refer to the [OpenAI official documentation](https://platform.openai.com/docs/guides/rate-limits) for details.
+
+### 9. After user self-deployment, Zhichat is not available, and the error is as follows: "Unrecognized request argument supplied: functions." How to resolve this?
+
+First, check if the front-end and back-end versions are the latest and consistent. Second, this error may occur because you are using an Azure OpenAI key but have not successfully deployed the model. Check if the model is deployed in your Azure OpenAI. The gpt-3.5-turbo model version must be 0613 or above (as versions before 0613 do not support the function call capability used by Zhichat, making it unusable).
+
+### 10. When setting the OpenAI key, the error is as follows. What is the reason?
+
+```JSON
+Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError(; Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
+```
+
+Usually, this is due to your environment setting a proxy. Please check if a proxy is set.
+
+### 11. When switching models in the application, I encountered the following error. How to resolve this?
+
+```JSON
+Anthropic: Error code: 400 - {'error': {'type': 'invalid_request_error', 'message': 'temperature: range: -1 or 0..1'}}
+```
+
+Each model accepts different parameter ranges. Set the parameter values according to the current model's range.
+
+### 12. I encountered the following error. How to resolve this?
+
+```JSON
+Query or prefix prompt is too long, you can reduce the prefix prompt, or shrink the max token, or switch to a llm with a larger token limit size
+```
+
+In the parameter settings on the orchestration page, reduce the value of "max token."
+
+### 13.
What is the default model in Dify, and can open-source models be used? + +The default model can be configured in **Settings - Model Providers**. Currently, it supports text generation models from providers like OpenAI / Azure OpenAI / Anthropic, and also supports integration of open-source models hosted on Hugging Face / Replicate / xinference. + +### 14. In the community edition, why does the dataset's **Q&A Segmentation Mode** keep showing "queued"? + +Check if the API key for the Embedding model you are using has reached the rate limit. + +### 15. When users encounter the error "Invalid token" while using the application, how to resolve it? + +If you encounter the error "Invalid token," try the following solutions: + +* Clear the browser cache (Cookies, Session Storage, and Local Storage). If using a mobile app, clear the corresponding app's cache and re-access it. +* Generate a new App URL and re-enter the URL. + +### 16. What are the size limits for uploading dataset documents? + +Currently, the maximum size for a single document upload is 15MB, with a total document limit of 100. If you need to adjust these limits for a locally deployed version, refer to [documentation](https://docs.dify.ai/v/zh-hans/getting-started/faq/install-faq#11.-ben-di-bu-shu-ban-ru-he-jie-jue-shu-ju-ji-wen-dang-shang-chuan-de-da-xiao-xian-zhi-he-shu-liang). + +### 17. Why does choosing the Claude model still consume OpenAI's quota? + +Because Claude does not support the Embedding model, the Embedding process and other dialogue generation by default use OpenAI's key, thus consuming OpenAI's quota. You can also set other default inference models and Embedding models in **Settings - Model Providers**. + +### 18. How can I control the use of more contextual data rather than the model's own generation capabilities? + +Whether to use the dataset depends on the dataset's description. Make the dataset description as clear as possible. 
For specific writing techniques, refer to [this documentation](https://docs.dify.ai/v/zh-hans/advanced/datasets). + +### 19. When uploading dataset documents in Excel, how to better segment them? + +Set the header in the first row, and display content in each subsequent row without additional header settings or complex table formats. + +For example, in the table below, only retain the second row's header. The first row (Table 1) is an extra header and should be removed. + +### 20. Why can't I use GPT-4 in Dify even though I bought ChatGPT Plus? + +OpenAI's GPT-4 model API and ChatGPT Plus are two separate products with separate charges. The model API has its own pricing. Refer to [OpenAI pricing documentation](https://openai.com/pricing). To apply for paid access, you must first bind a card. Binding a card grants GPT-3.5 access but not GPT-4 access. GPT-4 access requires a paid bill. Refer to [OpenAI official documentation](https://platform.openai.com/account/billing/overview) for details. + +### 21. How to add other Embedding Models? + +Dify supports the following for use as Embedding models. Simply select the `Embeddings` type in the configuration box. + +* Azure +* LocalAI +* MiniMax +* OpenAI +* Replicate +* XInference +* GPUStack + +### 22. How to set an application I created as an application template? + +This feature provides application templates for cloud version users to reference, and currently does not support setting your created applications as templates. If you use the cloud version, you can **Add to Workspace** or **Customize** it to become your own application. If you use the community version and need to create more application templates for your team, you can contact our commercialization team for paid technical support: `business@dify.ai`. 
diff --git a/en/learn-more/how-to-use-json-schema-in-dify.mdx b/en/learn-more/how-to-use-json-schema-in-dify.mdx new file mode 100644 index 00000000..3753e9fa --- /dev/null +++ b/en/learn-more/how-to-use-json-schema-in-dify.mdx @@ -0,0 +1,237 @@ +--- +title: How to Use JSON Schema Output in Dify +--- + + +JSON Schema is a specification for describing JSON data structures. Developers can define JSON Schema structures to specify that LLM outputs strictly adhere to the defined data or content, such as generating clear document or code structures. + +## Models Supporting JSON Schema Functionality + +- `gpt-4o-mini-2024-07-18` and later versions +- `gpt-4o-2024-08-06` and later versions + +> For more information on the structured output capabilities of OpenAI series models, please refer to [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction). + +## Usage of Structured Outputs + +1. Connect the LLM to tools, functions, data, and other components within the system. Set `strict: true` in the function definition. When enabled, the Structured Outputs feature ensures that the parameters generated by the LLM for function calls precisely match the JSON schema you provided in the function definition. + +2. When the LLM responds to users, it outputs content in a structured format according to the definitions in the JSON Schema. + +## Enabling JSON Schema in Dify + +Switch the LLM in your application to one of the models supporting JSON Schema output mentioned above. Then, in the settings form, enable `JSON Schema` and fill in the JSON Schema template. Simultaneously, enable the `response_format` column and switch it to the `json_schema` format. 
+ +![](../../../img/learn-more-json-schema.png) + +The content generated by the LLM supports output in the following format: + +- **Text:** Output in text format + +## Defining JSON Schema Templates + +You can refer to the following JSON Schema format to define your template content: + +```json +{ + "name": "template_schema", + "description": "A generic template for JSON Schema", + "strict": true, + "schema": { + "type": "object", + "properties": { + "field1": { + "type": "string", + "description": "Description of field1" + }, + "field2": { + "type": "number", + "description": "Description of field2" + }, + "field3": { + "type": "array", + "description": "Description of field3", + "items": { + "type": "string" + } + }, + "field4": { + "type": "object", + "description": "Description of field4", + "properties": { + "subfield1": { + "type": "string", + "description": "Description of subfield1" + } + }, + "required": ["subfield1"], + "additionalProperties": false + } + }, + "required": ["field1", "field2", "field3", "field4"], + "additionalProperties": false + } +} +``` + + +Step-by-step guide: + +1. Define basic information: + - Set `name`: Choose a descriptive name for your schema. + - Add `description`: Briefly explain the purpose of the schema. + - Set `strict`: true to ensure strict mode. + +2. Create the `schema` object: + - Set `type: "object"` to specify the root level as an object type. + - Add a `properties` object to define all fields. + +3. Define fields: + - Create an object for each field, including `type` and `description`. + - Common types: `string`, `number`, `boolean`, `array`, `object`. + - For arrays, use `items` to define element types. + - For objects, recursively define `properties`. + +4. Set constraints: + - Add a `required` array at each level, listing all required fields. + - Set `additionalProperties: false` at each object level. + +5. Handle special fields: + - Use `enum` to restrict optional values. 
+  - Use `$ref` to implement recursive structures.
+
+## Examples
+
+### 1. Chain of Thought (routine)
+
+**JSON Schema Example**
+
+```json
+{
+  "name": "math_reasoning",
+  "description": "Records steps and final answer for mathematical reasoning",
+  "strict": true,
+  "schema": {
+    "type": "object",
+    "properties": {
+      "steps": {
+        "type": "array",
+        "description": "Array of reasoning steps",
+        "items": {
+          "type": "object",
+          "properties": {
+            "explanation": {
+              "type": "string",
+              "description": "Explanation of the reasoning step"
+            },
+            "output": {
+              "type": "string",
+              "description": "Output of the reasoning step"
+            }
+          },
+          "required": ["explanation", "output"],
+          "additionalProperties": false
+        }
+      },
+      "final_answer": {
+        "type": "string",
+        "description": "The final answer to the mathematical problem"
+      }
+    },
+    "additionalProperties": false,
+    "required": ["steps", "final_answer"]
+  }
+}
+```
+
+**Prompts**
+
+```text
+You are a helpful math tutor. You will be provided with a math problem,
+and your goal will be to output a step by step solution, along with a final answer.
+For each step, provide the output as an equation and use the explanation field to detail the reasoning.
+```
+
+### 2. UI Generation (root recursion mode)
+
+**JSON Schema Example**
+
+```json
+{
+  "name": "ui",
+  "description": "Dynamically generated UI",
+  "strict": true,
+  "schema": {
+    "type": "object",
+    "properties": {
+      "type": {
+        "type": "string",
+        "description": "The type of the UI component",
+        "enum": ["div", "button", "header", "section", "field", "form"]
+      },
+      "label": {
+        "type": "string",
+        "description": "The label of the UI component, used for buttons or form fields"
+      },
+      "children": {
+        "type": "array",
+        "description": "Nested UI components",
+        "items": {
+          "$ref": "#"
+        }
+      },
+      "attributes": {
+        "type": "array",
+        "description": "Arbitrary attributes for the UI component, suitable for any element",
+        "items": {
+          "type": "object",
+          "properties": {
+            "name": {
+              "type": "string",
+              "description": "The name of the attribute, for example onClick or className"
+            },
+            "value": {
+              "type": "string",
+              "description": "The value of the attribute"
+            }
+          },
+          "additionalProperties": false,
+          "required": ["name", "value"]
+        }
+      }
+    },
+    "required": ["type", "label", "children", "attributes"],
+    "additionalProperties": false
+  }
+}
+```
+
+**Prompts**
+
+```text
+You are a UI generator AI. Convert the user input into a UI.
+```
+
+**Example Output:**
+
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/03c1132f714cbd5920f5b0f80287a07d.png)
+
+## Tips
+
+- Ensure that the application prompt includes instructions on how to handle cases where user input cannot produce a valid response.
+
+- The model will always attempt to follow the provided schema. If the input is completely unrelated to the specified schema, the LLM may hallucinate output to satisfy it.
+
+- If the LLM may receive input that is incompatible with the task, you can include language in the prompt instructing it to return empty parameters or a specific fallback sentence.
+
+- All fields must be `required`. For details, please refer to [Supported Schemas](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas).
+
+- [additionalProperties: false](https://platform.openai.com/docs/guides/structured-outputs/additionalproperties-false-must-always-be-set-in-objects) must always be set in objects.
+
+- The root of the schema must be an object (not an array or another type).
+
+## Appendix
+
+- [Introduction to Structured Outputs](https://cookbook.openai.com/examples/structured_outputs_intro)
+
+- [Structured Output](https://platform.openai.com/docs/guides/structured-outputs/json-mode?context=without_parse) diff --git a/en/learn-more/prompt-engineering/README.mdx b/en/learn-more/prompt-engineering/README.mdx new file mode 100644 index 00000000..1327523b --- /dev/null +++ b/en/learn-more/prompt-engineering/README.mdx @@ -0,0 +1,32 @@
+---
+title: Designing Prompts & Orchestrating Applications
+---
+
+
+Master how to use Dify to orchestrate applications and practice Prompt Engineering. By leveraging two built-in application types, you can build high-value AI applications.
+
+Dify's core philosophy is the declarative definition of AI applications. Everything, including prompts, context, and plugins, can be described through a YAML file (hence the name Dify). The final output is a single API or a ready-to-use WebApp.
+
+At the same time, Dify provides an easy-to-use prompt orchestration interface, allowing developers to visually orchestrate various application features based on prompts. Sounds simple, right?
+
+Whether the AI application is simple or complex, a good prompt can effectively improve the model's output quality, reduce error rates, and meet specific scenario requirements. Dify already offers two common application types: conversational and text generation. This chapter will guide you through orchestrating AI applications in a visual manner.
+
+### Steps for Application Orchestration
+
+1.
Determine the application scenario and functional requirements +2. Design and test prompts and model parameters +3. Orchestrate prompts with user input +4. Publish the application +5. Observe and continuously iterate + +### Understanding the Differences Between Application Types + +Text generation applications and conversational applications in Dify have slight differences in prompt orchestration. Conversational applications need to incorporate the "conversation lifecycle" to meet more complex user scenarios and context management requirements. + +Prompt Engineering has evolved into a promising field worth continuous exploration. Continue reading to learn the orchestration guidelines for the two types of applications. + +### Further Reading + +1. [Learn Prompting](https://learnprompting.org/zh-Hans/) +2. [ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) +3. [Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts) \ No newline at end of file diff --git a/en/learn-more/prompt-engineering/prompt-engineering-1/README.mdx b/en/learn-more/prompt-engineering/prompt-engineering-1/README.mdx new file mode 100644 index 00000000..012bf284 --- /dev/null +++ b/en/learn-more/prompt-engineering/prompt-engineering-1/README.mdx @@ -0,0 +1,236 @@ +--- +title: Expert Mode for Prompt Engineering (Discontinued) +--- + + +When creating an app on Dify, the default orchestration mode is **Simple Mode**, which is ideal for non-technical users who want to quickly create applications like a company knowledge base chatbot or an article summarizer. Using **Simple Mode**, you can orchestrate pre-prompt phrases, add variables, and context with simple steps to publish a complete application (refer to 👉[conversation-application.md](../../../../guides/application\_orchestrate/conversation-application.md "mention")). 
+ +However, if you are a technical user proficient in using **OpenAI's** **Playground** and want to create a learning tutor application that requires embedding different contexts and variables into the prompts for various teaching modules, you can choose **Expert Mode**. In this mode, you can freely write complete prompts, including modifying built-in prompts, adjusting the position of context and chat history within the prompts, and setting necessary parameters. If you are familiar with both Chat and Complete models, **Expert Mode** allows you to quickly switch between these models to meet your needs, and both are suitable for conversational and text generation applications. + +Before you start experimenting with the new mode, you need to know the essential elements of **Expert Mode**: + +* **Text Completion Model** ![](../../../../.gitbook/assets/screenshot-20231017-092613.png) + + When selecting a model, the name with COMPLETE on the right is a text completion model. This model accepts a free-form text string called a "prompt" and generates a text completion that tries to match any context or pattern you give it. For example, if your prompt is: "As Descartes said, I think therefore," it will likely return "I am" as the completion. +* **Chat Model** + + When selecting a model, the name with CHAT on the right is a chat model. This model takes a list of messages as input and returns a generated message as output. Although the chat format is designed to simplify multi-turn conversations, it is also useful for single-turn tasks without any conversation. Chat models use chat messages as input and output, including three types of messages: SYSTEM, USER, and ASSISTANT: + + * `SYSTEM` + * System messages help set the behavior of the AI assistant. For example, you can modify the AI assistant's personality or provide specific instructions on how it should behave throughout the conversation. 
System messages are optional, and the model's behavior without system messages may be similar to using a generic message like "You are a helpful assistant."
+    * `USER`
+      * User messages provide requests or comments for the AI assistant to respond to.
+    * `ASSISTANT`
+      * Assistant messages store previous assistant responses but can also be written by you to provide examples of the desired behavior.
+* **Stop Sequences**
+
+  These are specific words, phrases, or characters used to signal the LLM to stop generating text.
+* **Content Blocks in Expert Mode Prompts**
+  * `context`
+
+    In an app configured with a dataset, the user inputs a query, and the app uses this query as a retrieval condition for the dataset. The retrieved results are organized and replace the `context` variable, allowing the LLM to reference the context content to provide an answer.
+  * `query content`
+
+    The query content is only available in text completion models for conversational applications. The content input by the user in the conversation will replace this variable, triggering a new round of dialogue.
+  * `conversation history`
+
+    Conversation history is only available in text completion models for conversational applications. During multiple conversations in a conversational application, Dify assembles and concatenates the historical conversation records according to built-in rules and replaces the `conversation history` variable. The Human and Assistant prefixes can be modified by clicking the `...` after `conversation history`.
+* **Initial Template**
+
+  In **Expert Mode**, before formal orchestration, the prompt box provides an initial template that you can directly modify to make more customized requests to the LLM. Note: There are differences based on the type of application and mode.
+
+  For details, please refer to 👉[prompt-engineering-template.md](prompt-engineering-template.md "mention")
+
+## Comparison of Two Modes
+
+| Comparison Dimension | Simple Mode | Expert Mode |
+| --- | --- | --- |
+| Built-in Prompt Visibility | Encapsulated and invisible | Open and visible |
+| Automatic Orchestration | Available | Unavailable |
+| Difference in Text Completion and Chat Model Selection | None | Different orchestration after selecting text completion and chat models |
+| Variable Insertion | Available | Available |
+| Content Block Validation | None | Available |
+| SYSTEM / USER / ASSISTANT Message Type Orchestration | None | Available |
+| Context Parameter Settings | Configurable | Configurable |
+| View PROMPT LOG | View full prompt log | View full prompt log |
+| Stop Sequences Parameter Settings | None | Configurable |
+## Operating Instructions + +### 1. How to Enter Expert Mode + +After creating an application, you can switch to **Expert Mode** on the prompt orchestration page, where you can edit the complete application prompts. + +![Expert Mode Entry](../../../../.gitbook/assets/专家模式.png) + + +After modifying prompts and publishing the application in **Expert Mode**, you cannot return to **Simple Mode**. + + +### 2. Modify Inserted Context Parameters + +In both **Simple Mode** and **Expert Mode**, you can modify the parameters for the inserted context, including **TopK** and **Score Threshold**. + + +Note that the built-in prompt containing \{{#context#\}} will only be displayed in **Expert Mode** after uploading the context. + + +![Context Parameter Settings](../../../../.gitbook/assets/参数设置.png) + +**TopK: Value range is an integer from 1 to 10** + +Used to filter text fragments with the highest similarity to the user's question. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 2. It is recommended to set this value between 2 and 5, as we expect to get answers that better match the embedded context. + +**Score Threshold: Value range is a floating-point number with two decimal places from 0 to 1** + +Used to set the similarity threshold for filtering text fragments, i.e., only recalling text fragments that exceed the set score (you can view the hit score of each fragment in the "Hit Test"). The system defaults to this setting being off, meaning it will not filter the recalled text fragments by similarity value. When turned on, the default value is 0.7. It is recommended to keep this setting off by default, but if you require more precise responses, you can set a higher value (the maximum value is 1, but it is not recommended to set it too high). + +### 3. 
Set **Stop Sequences** + +We do not want the LLM to generate unnecessary content, so specific words, phrases, or characters (default setting is `Human:`) need to be set to inform the LLM to stop generating text. + +For example, if you write a _Few-Shot_ prompt: + +``` +Human1: What color is the sky? +Assistant1: The sky is blue. +Human1: What color is fire? +Assistant1: Fire is red. +Human1: What color is soil? +Assistant1: +``` + +Then in the model parameters' `Stop Sequences`, input `Human1:`, and press the "Tab" key. + +This way, the LLM will only respond with one sentence: + +``` +Assistant1: Soil is yellow. +``` + +And will not generate additional dialogue (i.e., the LLM will stop generating content before reaching the next "Human1:"). + +### 4. Quick Insert Variables and Content Blocks + +In **Expert Mode**, you can type "`/`" in the text editor to quickly bring up content blocks to insert into the prompt. Content blocks include: `context`, `variable`, `conversation history`, `query content`. You can also type "`{`" to quickly insert a list of previously created variables. + +![Shortcut Key “/”](../../../../.gitbook/assets/快捷键.png) + + +Content blocks other than "variables" cannot be inserted repeatedly. The available content blocks may vary based on the prompt template structure in different applications and models. `Conversation history` and `query content` are only available in text completion models for conversational applications. + + +### 5. Input Pre-prompt + +The initial template of the system's prompt provides necessary parameters and LLM response requirements. For details, see 👉[prompt-engineering-template.md](prompt-engineering-template.md "mention"). + +The core of early orchestration by developers is the pre-prompt, which needs to be edited and inserted into the built-in prompt. 
The suggested insertion position is as follows (taking the creation of an "iPhone Consultation Customer Service" as an example):
+
+```
+When answering the user:
+- If you don't know, just say that you don't know.
+- If you don't know or are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.
+
+You are a customer service assistant for Apple Inc., and you can provide consultation services for iPhones.
+When you answer, you need to list detailed iPhone parameters, and you must output this information as a vertical MARKDOWN table. If the list is too long, transpose it.
+You are allowed to think for a long time to generate a more reasonable output.
+Note: You currently only have information on some iPhone models, not all of them.
+```
+
+Of course, you can also customize the initial template. For example, if you want the LLM's responses to be in English, you can modify the built-in prompt as follows:
+
+```
+When answering the user:
+- If you don't know, just say that you don't know.
+- If you don't know or are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer in English.
+```
+
+### 6. Debug Logs
+
+During orchestration debugging, you can view more than just the user's input and the LLM's response. In **Expert Mode**, click the icon at the top left of the send-message button to see the complete prompt, making it easier for developers to confirm whether the input variable content, context, chat history, and query content meet expectations.
For a detailed explanation of the log list, please refer to the log documentation 👉: [logs.md](../../../../guides/biao-zhu/logs.md "mention")
+
+#### 6.1 **View Debug Logs**
+
+In the debug preview interface, after a conversation between the user and the AI, move the mouse pointer over any user message, and you will see the "Log" icon button at the top left. Click it to view the prompt log.
+
+![Debug Log Entry](../../../../.gitbook/assets/日志.png)
+
+In the log, you can clearly see:
+
+* The complete built-in prompt
+* Relevant text fragments referenced in the current session
+* Historical conversation records
+
+![View Prompt Log in Debug Preview Interface](../../../../.gitbook/assets/11.png)
+
+From the log, you can see the complete prompt sent to the LLM after system assembly and continuously improve the prompt input based on the debugging results.
+
+#### **6.2 Trace Debug History**
+
+On the app's main build interface, you can see "Logs and Annotations" in the left navigation bar. Click it to view the complete logs. On the main Logs and Annotations page, click any conversation log entry; in the dialog that opens on the right, hover over the conversation and click the "Log" button to view the prompt log.
+
+![View Prompt Log in Logs and Annotations Interface](../../../../.gitbook/assets/12.png) \ No newline at end of file diff --git a/en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.mdx b/en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.mdx new file mode 100644 index 00000000..e7e7ce4f --- /dev/null +++ b/en/learn-more/prompt-engineering/prompt-engineering-1/prompt-engineering-template.mdx @@ -0,0 +1,151 @@
+---
+title: Initial Prompt Template References
+---
+
+
+To meet developers' needs for more customized control over LLMs, Dify fully opens up the complete prompts in **Expert Mode** and provides initial templates in the orchestration interface.
Here are references for four initial templates:
+
+### 1. Template for Building Conversational Applications Using Chat Models
+
+* **SYSTEM**
+
+```
+Use the following context as your learned knowledge, inside <context></context> XML tags.
+
+<context>
+{{#context#}}
+</context>
+
+When answering the user:
+- If you don't know, just say that you don't know.
+- If you are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.
+{{pre_prompt}}
+```
+
+* **USER**
+
+```
+{{Query}} // Input the query variable here
+```
+
+* **ASSISTANT**
+
+```Python
+""
+```
+
+#### **Template Structure:**
+
+* Context (`Context`)
+* Pre-prompt (`Pre-prompt`)
+* Query Variable (`Query`)
+
+### 2. Template for Building Text Generation Applications Using Chat Models
+
+* **SYSTEM**
+
+```
+Use the following context as your learned knowledge, inside <context></context> XML tags.
+
+<context>
+{{#context#}}
+</context>
+
+When answering the user:
+- If you don't know, just say that you don't know.
+- If you are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.
+{{pre_prompt}}
+```
+
+* **USER**
+
+```
+{{Query}} // Input the query variable here, commonly in the form of a paragraph
+```
+
+* **ASSISTANT**
+
+```Python
+""
+```
+
+#### **Template Structure:**
+
+* Context (`Context`)
+* Pre-prompt (`Pre-prompt`)
+* Query Variable (`Query`)
+
+### 3. Template for Building Conversational Applications Using Text Completion Models
+
+```Python
+Use the following context as your learned knowledge, inside <context></context> XML tags.
+
+<context>
+{{#context#}}
+</context>
+
+When answering the user:
+- If you don't know, just say that you don't know.
+- If you are not sure, ask for clarification.
+Avoid mentioning that you obtained the information from the context.
+And answer according to the language of the user's question.

{{pre_prompt}}

Here are the chat histories between human and assistant, inside <histories></histories> XML tags.

<histories>
{{#histories#}}
</histories>

Human: {{#query#}}

Assistant:
```

**Template Structure:**

* Context (`Context`)
* Pre-prompt (`Pre-prompt`)
* Conversation History (`History`)
* Query Variable (`Query`)

### 4. Template for Building Text Generation Applications Using Text Completion Models

```Python
Use the following context as your learned knowledge, inside <context></context> XML tags.

<context>
{{#context#}}
</context>

When answering the user:
- If you don't know, just say that you don't know.
- If you are not sure, ask for clarification.
Avoid mentioning that you obtained the information from the context.
And answer according to the language of the user's question.

{{pre_prompt}}
{{query}}
```

**Template Structure:**

* Context (`Context`)
* Pre-prompt (`Pre-prompt`)
* Query Variable (`Query`)

Dify has worked with some model vendors to deeply optimize the system prompts, so the initial templates under some models may differ from the examples above.

### Parameter Descriptions

* Context (`Context`): Used to insert relevant text from the dataset as the context of the complete prompt.
* Pre-prompt (`Pre-prompt`): In **Easy Mode**, the orchestrated pre-prompt is inserted into the complete prompt.
* Conversation History (`History`): When building chat applications using text completion models, the system inserts the user's conversation history into the complete prompt as context. Since some models respond differently to role prefixes, you can also modify the role prefix in the conversation history settings, e.g., changing "Assistant" to "AI".
* Query (`Query`): The variable used to insert the user's question into the complete prompt.
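The placeholders above are slots that are filled at runtime. As a rough illustration only (the substitution mechanics below are an assumption for clarity, not Dify's actual implementation), template 3 can be assembled like this:

```python
# Illustrative sketch only: the slot names mirror the template above, but this
# substitution logic is an assumption, not Dify's internal implementation.

TEMPLATE = """\
Use the following context as your learned knowledge, inside <context></context> XML tags.

<context>
{context}
</context>

{pre_prompt}

Here are the chat histories between human and assistant, inside <histories></histories> XML tags.

<histories>
{histories}
</histories>

Human: {query}

Assistant:"""

def assemble_prompt(context: str, pre_prompt: str, histories: str, query: str) -> str:
    """Fill each slot of the text-completion template with its runtime value."""
    return TEMPLATE.format(
        context=context, pre_prompt=pre_prompt, histories=histories, query=query
    )

prompt = assemble_prompt(
    context="Dify is an open-source platform for building LLM applications.",
    pre_prompt="You are a concise documentation assistant.",
    histories="Human: Hi\nAssistant: Hello! How can I help?",
    query="What is Dify?",
)
print(prompt)
```

The assembled string is what the model ultimately receives, which is why editing the pre-prompt alone never removes the surrounding context and history scaffolding.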
\ No newline at end of file
diff --git a/en/learn-more/use-cases/README.mdx
new file mode 100644
index 00000000..c590ad53
--- /dev/null
+++ b/en/learn-more/use-cases/README.mdx
@@ -0,0 +1,5 @@
---
title: Use Cases
---

diff --git a/en/learn-more/use-cases/build-an-notion-ai-assistant.mdx
new file mode 100644
index 00000000..ab52ba17
--- /dev/null
+++ b/en/learn-more/use-cases/build-an-notion-ai-assistant.mdx
@@ -0,0 +1,167 @@
---
title: Build a Notion AI Assistant
---

### Intro

Notion is a powerful tool for managing knowledge. Its flexibility and extensibility make it an excellent personal knowledge library and shared workspace. Many people use it to store their knowledge and work in collaboration with others, facilitating the exchange of ideas and the creation of new knowledge.

However, this knowledge remains static, as users must search for the information they need and read through it to find the answers they're seeking. This process is neither particularly efficient nor intelligent.

Have you ever dreamed of having an AI assistant based on your Notion library? This assistant would not only help you review your knowledge base, but also engage in conversation like a seasoned butler, even answering other people's questions as if you were the master of your personal Notion library.

### How to Make Your Notion AI Assistant Come True?

Now, you can make this dream come true through [Dify](https://dify.ai/). Dify is an open-source LLMOps (Large Language Model Ops) platform.

Large Language Models like ChatGPT and Claude have been using their impressive abilities to reshape the world. Their powerful learning aptitude is primarily attributable to robust training data.
Luckily, they've evolved to be sufficiently intelligent to learn from the content you provide, making it a reality to ideate from your personal Notion library.

Without Dify, you would need to acquaint yourself with LangChain, an abstraction that streamlines the process of assembling these pieces.

### How to Use Dify to Build Your Personal Notion AI Assistant?

The process of training a Notion AI assistant is relatively straightforward. Just follow these steps:

1. Log in to Dify.
2. Create a new knowledge base.
3. Connect Notion with your knowledge base.
4. Start training.
5. Create your own AI application.

#### 1. Log in to Dify

Click [here](https://dify.ai/) to log in to Dify. You can conveniently log in using your GitHub or Google account.

> If you log in with a GitHub account, how about giving this [project](https://github.com/langgenius/dify) a star? It really helps us a lot!

#### 2. Create a New Knowledge Base

Click the `Knowledge` button on the top side bar, followed by the `Create Knowledge` button.

![](https://assets-docs.dify.ai/2025/03/a5d9c40fb35b0f80e7de2b7418f6eedd.png)

#### 3. Connect Notion with Your Knowledge Base

Select "Sync from Notion" and then click the "Connect" button.

![](https://assets-docs.dify.ai/2025/03/f7a0f9cab9e93ea0e8874c001ffd2af3.png)

Afterward, you'll be redirected to the Notion login page. Log in with your Notion account.

![](https://assets-docs.dify.ai/2025/03/fd4714139bdcf1509d8a8ae2a3d7afd9.png)

Check the permissions needed by Dify, and then click the "Select pages" button.

![](https://assets-docs.dify.ai/2025/03/b4b7faedab6c232a5680322801e4466a.png)

Select the pages you want to synchronize with Dify, and press the "Allow access" button.

![](https://assets-docs.dify.ai/2025/03/fe00306264fa0334c6f96ba049460d08.png)

#### 4.
Start Training

Specify the pages the AI needs to study so that it can comprehend the content in this section of Notion, then click the "Next" button.

![](https://assets-docs.dify.ai/2025/03/4e86d66f5ce46b2043f0eb9165a73572.png)

We suggest selecting the "Automatic" and "High Quality" options to train your AI assistant. Then click the "Save & Process" button.

![](https://assets-docs.dify.ai/2025/03/21184bca8647509abc58c75ad3002582.png)

Enjoy your coffee while waiting for the training process to complete.

![](https://assets-docs.dify.ai/2025/03/73cddadbf3b34edda974f8f47165d988.png)

#### 5. Create Your AI Application

Next, create an AI application and link it with the knowledge base you've just created.

Return to the dashboard and click the "Create new App" button. It's recommended to use the Chat App directly.

![](https://assets-docs.dify.ai/2025/03/8e91ce29dd346b3edd0f26a261e27ad2.png)

Select "Prompt Eng." and link your Notion knowledge base in the "Context" section.

![](https://assets-docs.dify.ai/2025/03/853c348bfc86a2f845a7698fe715f6f1.png)

I recommend adding a 'Pre Prompt' to your AI application. Just as spells are essential to Harry Potter, the right instructions can greatly enhance the abilities of an AI application.

For example, if your Notion notes focus on problem-solving in software development, you could write in one of the prompts:

_I want you to act as an IT Expert in my Notion workspace, using your knowledge of computer science, network infrastructure, Notion notes, and IT security to solve the problems._

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/f5a5e95e1120906669b3c1ad4e186dea.png)

It's also recommended to have the AI greet users with a starter sentence, giving them a clue as to what they can ask.
Furthermore, activating the 'Speech to Text' feature allows users to interact with your AI assistant using their voice.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/ffa36d0313dba8ac4255424c4f880191.png)

Finally, click the "Publish" button at the top right of the page. Now you can click the public URL in the "Monitoring" section to converse with your personalized AI assistant!

![create-app-4](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/1f8c6f7d7f3d61a1928d25ab85f7b1ff.png)

### Utilizing the API to Integrate With Your Project

Each AI application built with Dify can be accessed via its API. This allows developers to tap directly into the robust capabilities of large language models (LLMs) from frontend applications, delivering a true "Backend-as-a-Service" (BaaS) experience.

With effortless API integration, you can conveniently invoke your Notion AI application without intricate configuration.

Click the "API Reference" button on the Overview page. You can use it as your app's API documentation.

![using-api-1](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/4062ac40525d573f14a8059a6605688a.png)

#### 1. Generate an API Secret Key

For security reasons, it's recommended to create a new API secret key to access your AI application.

![using-api-2](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/b2546aa50caff4316798edc0b950661d.png)

#### 2. Retrieve the Conversation ID

After chatting with your AI application, you can retrieve the session ID from the "Logs & Ann." page.

![using-api-3](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/6feec1081814299c574b5fb4af3f2bb8.png)

#### 3.
Invoke the API

You can run the example request code from the API documentation to invoke your AI application in the terminal.

Remember to replace `YOUR SECRET KEY` and `conversation_id` in your code.

> You can send an empty `conversation_id` the first time, and replace it once you receive a response containing a `conversation_id`.

```
curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \
--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
    "inputs": {},
    "query": "eh",
    "response_mode": "streaming",
    "conversation_id": "",
    "user": "abc-123"
}'
```

Send the request in the terminal and you will get a successful response.

![](https://assets-docs.dify.ai/2025/03/fb57e6a8fdb61fbc7b3c3d42f32d2ca5.png)

If you want to continue the chat, replace the `conversation_id` in the request code with the `conversation_id` you got from the response.

You can check the whole conversation history on the "Logs & Ann." page.

![](https://assets-docs.dify.ai/2025/03/1b7644c76a65eb6b7ad4937ba681fcd3.png)

### Sync with Notion Periodically

If your Notion pages have been updated, you can sync them with Dify periodically to keep your AI assistant up to date. Your AI assistant will learn from the new content.

![](https://assets-docs.dify.ai/2025/03/5a2b362a900401b85f1ba69414edf076.png)

### Summary

In this tutorial, we have learned not only how to import your Notion data into Dify, but also how to use the API to integrate it with your project.

[Dify](https://dify.ai/) is a user-friendly LLMOps platform that aims to empower more individuals to create sustainable, AI-native applications. With visual orchestration designed for various application types, Dify offers ready-to-use applications that can assist you in utilizing data to craft your distinctive AI assistant.
Do not hesitate to contact us if you have any inquiries.
diff --git a/en/learn-more/use-cases/building-an-ai-thesis-slack-bot.mdx
new file mode 100644
index 00000000..a048db86
--- /dev/null
+++ b/en/learn-more/use-cases/building-an-ai-thesis-slack-bot.mdx
@@ -0,0 +1,148 @@
---
title: Building an AI Thesis Slack Bot on Dify Cloud
---

> Author: Alec Lee, 2025/03/11

## 1. Overview

With the rapid growth of academic research in the information age, researchers require more efficient ways to access the latest findings. The AI Thesis Slack Bot streamlines this process by leveraging AI-driven automated workflows, enabling users to quickly retrieve arXiv paper summaries within Slack.

This tool can be used in various real estate-related contexts, such as:

* Research teams tracking the latest AI advancements in real estate technology
* Internal synchronization of information for AI research departments in real estate firms
* Academic collaborations between university faculty and students on real estate innovation

This guide will walk you through setting up the AI Thesis Slack Bot, its core operating principles, and how to maximize its efficiency to enhance productivity in the real estate sector.

## 2. Preparation

### 2.1 Configuring the OpenAI API

Set up OpenAI in your account's model settings and add your API key.

![API](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/001API.jpg)

### 2.2 Installing the ArXiv and Slack Plugins

Install the ArXiv and Slack tools within the Dify platform.

### 2.3 Creating a Slack Account

Sign up for a free Slack account on the [official Slack website](https://slack.com/intl/en-gb/get-started?entry_point=help_center#/createnew).

![Slack](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/003SlackAccount.jpg)

## 3.
Setting Up the AI Thesis Slack Bot Workflow

The AI Thesis Slack Bot operates through the following automated process:

**a. User Input:** The user enters a keyword (e.g., *"Large Language Model"*) in the Dify AI Thesis Slack Bot.

**b. Paper Retrieval:** The bot queries arXiv for relevant research papers, filtering for the most recent publications (e.g., papers published after January 1, 2024).

**c. AI-Powered Summarization:** Using GPT-4o, the bot processes and summarizes the papers, then formats the summary for Slack in the following structure:

 📄 **Title:** \[Paper Title\]
 👤 **Author(s):** \[Author Names\]
 📆 **Publication Date:** \[Date\]
 📌 **Summary:** \[Key takeaways from the paper\]

**d. Automated Slack Push:** The bot automatically posts the summary to a designated Slack channel, ensuring that team members can quickly access the latest research updates, whether in a public channel or private messages.

## 4. Implementation Steps

### 4.1 Creating the Workflow

a. On the Dify homepage, select Create from Blank, then choose Workflow and enter a name (e.g., *AI Thesis Slack Bot*).

![Create from Blank](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/004Createfromblank.jpg)

b. In the Tools section, select the ArXiv Search tool that has already been installed.

![Tools ArXiv](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/005ToolsArXiv.jpg)

c. In the Nodes section, choose LLM, and configure it to use the pre-set OpenAI model.

![LLM](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/006LLM.jpg)

d. In the Tools section, select the installed Slack Incoming Webhook, click Authorize, and add the Slack Webhook URL.

![Slack Incoming Webhook](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/007Slackincomingwebhook.jpg)

#### 4.2 Adding the Slack Webhook URL

a.
Go to the [Slack API Management Page](https://api.slack.com/apps) and click Create New App. + +![Slack API](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/008Slackapi.jpg) + +b. Select "From scratch", enter the app name (e.g., *AI Thesis Bot*), and choose the Slack channel where messages will be sent. + +![From Scratch](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/009Fromscratch.jpg) + +c. Navigate to Incoming Webhooks, enable Activate Incoming Webhooks, then click Add New Webhook to Workspace. Select the Slack channel, then copy the generated Webhook URL. + +![Incoming Webhooks Activate](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/010IncomingwebhooksActivate.jpg) + +d. Paste the Webhook URL into the Slack Webhook URL field in the Slack node. + +![Slack Webhook URL](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/011SlackWehookURL.jpg) + +e. After selecting End as the final node in the workflow, ensure that all workflow nodes are properly connected. Next, proceed to configure the parameters for each node. + +![End](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/012End.jpg) + +#### 4.3 Configuring Node Parameters + +a. Start Node: Set the keyword query parameters. + +![Start](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/013StartNode.jpg) + +b. ArXiv Search Node: Add the Query String content (adjustable based on requirements). + + + +c. LLM Node: Select the AI model, add CONTEXT, customize Prompt Engineering in the SYSTEM section (modifiable as needed), and set Context in the USER section. + +![LLM Context](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/015LLMcontext.jpg) + +d. Slack Node: In the Content field, select LLM/Text String. + +![Slack Content](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/016SlackEndP.jpg) + +### 4.4 Testing and Deployment + +a. 
Run a test before deployment to ensure the workflow functions correctly. Once verified, click Deploy. + +![Shiyunxing](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/018TestInupt.jpg) + +b. If the Dify search results match the Slack notifications, congratulations\! Your workflow is successfully running. 🎉 + +![Last P](https://raw.githubusercontent.com/aleclee1005/MyPic/refs/heads/img/019LastPTest.jpg) + +## 5. Future Optimization Directions + +Currently, the AI Thesis Slack Bot primarily focuses on ArXiv paper retrieval and summary delivery. Future improvements could include: + + ✅ Enhancing Summary Quality: Refining LLM prompts for greater accuracy and relevance. + ✅ Building a Searchable Archive: Creating a database to store historical research papers. + ✅ Expanding Data Sources: Supporting IEEE, Springer, ACL, and other academic repositories. + ✅ Personalized Recommendations: Suggesting relevant papers based on user interests. + ✅ Multi-Platform Support: Enabling compatibility with WhatsApp, Teams, WeChat, and more. + +## 6. Conclusion + +With the AI Thesis Slack Bot, you can automate academic information retrieval, improving research team productivity. If you're interested in further unlocking its potential, consider integrating Dify with a Realtime API to develop advanced applications, such as real-time paper discussions and AI-powered Q\&A, allowing AI to play a greater role in academic collaboration and AI-driven research. 
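As a sanity check outside Dify, the push that the workflow's Slack node performs boils down to a single HTTP POST to the incoming webhook. Here is a minimal sketch, assuming the summary structure from section 3; the webhook URL and paper details are placeholders:

```python
# Standalone check of a Slack incoming webhook, mirroring what the Slack node
# in the workflow sends. The webhook URL below is a placeholder -- paste the
# one generated in section 4.2 before sending for real.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def format_summary(title: str, authors: str, date: str, summary: str) -> str:
    """Format a paper summary in the structure described in section 3."""
    return (
        f"📄 *Title:* {title}\n"
        f"👤 *Author(s):* {authors}\n"
        f"📆 *Publication Date:* {date}\n"
        f"📌 *Summary:* {summary}"
    )

def build_request(text: str) -> urllib.request.Request:
    """Slack incoming webhooks accept a JSON body with a `text` field."""
    return urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(format_summary(
    "Attention Is All You Need", "Vaswani et al.", "2017-06-12",
    "Introduces the Transformer architecture.",
))
# To actually send it: urllib.request.urlopen(req)
```

If this POST succeeds against your webhook, any remaining issues are in the workflow wiring rather than in Slack.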
diff --git a/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.mdx
new file mode 100644
index 00000000..29f98d3a
--- /dev/null
+++ b/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.mdx
@@ -0,0 +1,59 @@
---
title: Create a MidJourney Prompt Bot with Dify
---

via [@op7418](https://twitter.com/op7418) on Twitter

I recently tried out a natural language programming tool called Dify, developed by [@goocarlos](https://twitter.com/goocarlos). It allows someone without coding knowledge to create a web application just by writing prompts. It even generates the API for you, making it easy to deploy your application on your preferred platform.

The application I created using Dify took me only 20 minutes, and the results were impressive. Without Dify, it might have taken me much longer to achieve the same outcome. The application generates Midjourney prompts from short input topics, helping users quickly fill in common Midjourney commands. In this tutorial, I will walk you through the process of creating this application to familiarize you with the platform.

Dify offers two types of applications: conversational applications similar to ChatGPT, which involve multi-turn dialogue, and text generation applications that directly generate text content at the click of a button. Since we want to create a Midjourney prompt bot, we'll choose the text generator.

You can access Dify here: https://dify.ai/

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/b9e69dc45562f699ef2d6e5f8d71e7ce.png)

Once you've created your application, the dashboard page will display some data monitoring and application settings. Click on "Prompt Engineering" on the left, which is the main working page.
![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/936c786960ac4bf7936227c58b4754a4.png)

On this page, the left side is for prompt settings and other functions, while the right side provides a real-time preview and usage of your created content. The prefix prompt is attached before each piece of user input and instructs the GPT model how to process that input.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/488879620e712ce535106fd58d529bb7.jpeg)

Take a look at my prefix prompt structure: the first part instructs GPT to output a description of a photo in the following structure. The second part serves as the template for generating the prompt, mainly consisting of elements like 'Color photo of the theme,' 'Intricate patterns,' 'Stark contrasts,' 'Environmental description,' 'Camera model,' 'Lens focal length description related to the input content,' 'Composition description relative to the input content,' and 'The names of four master photographers.' This constitutes the main content of the prompt. In theory, you can now save this, enter the theme you want in the preview area on the right, and the corresponding prompt will be generated.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/9c03412aa11d3542940e3bdcc0a85a4e.png)

You may have noticed the `{{proportion}}` and `{{version}}` at the end. These are variables used to pass user-selected information. On the right side, users are required to choose image proportions and model versions, and these two variables carry that information to the end of the prompt. Let's see how to set them up.
![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/ca81600060ffdfb7ec2a6e3c72bd86d7.png)

Our goal is to fill in the user's selected information at the end of the prompt, making it easy for users to copy the result without having to rewrite or memorize these commands. For this, we use the variable function.

Variables allow us to dynamically incorporate the user's form-filled or selected content into the prompt. For example, I've created two variables: one represents the image proportion, and the other represents the model version. Click the "Add" button to create the variables.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/86d0eb832fd8457dc3cb2e5656f96425.jpeg)

After creation, you'll need to fill in the variable key and field name. The variable key should be in English. The optional setting means the field is not mandatory for the user to fill in. Next, click "Settings" in the action bar to set the variable content.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/ac8ab15304f9f1ff5fa67e78726e101c.jpeg)

Variables can be of two types: text variables, where users manually input content, and select options, where users pick from given choices. Since we want to spare users from typing commands manually, we'll choose the dropdown option and add the required choices.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/6a906ca7145c8c32bf9a72fc02b17c0f.png)

Now, let's use the variables. We need to enclose each variable key within double curly brackets, e.g. `{{proportion}}`, and add it to the prefix prompt. Since we want the GPT to output the user-selected content as is, we'll include the phrase "Producing the following English photo description based on user input" in the prompt.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/dd064aee30a248ce62f4fb197096c38d.jpeg) + +However, there's still a chance that GPT might modify our variable content. To address this, we can lower the diversity in the model selection on the right, reducing the temperature and making it less likely to alter our variable content. You can check the tooltips for other parameters' meanings. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/ea875bf33f674a39318e26b67e561e61.png) + +With these steps, your application is now complete. After testing and ensuring there are no issues with the output, click the "Publish" button in the upper right corner to release your application. You and users can access your application through the publicly available URL. You can also customize the application name, introduction, icon, and other details in the settings. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/3f997d26023f1ee9bc8604eb2eb1cb6e.png) + +That's how you create a simple AI application using Dify. You can also deploy your application on other platforms or modify its UI using the generated API. Additionally, Dify supports uploading your own data, such as building a customer service bot to assist with product-related queries. This concludes the tutorial, and a special thanks to @goocarlos for creating such a fantastic product. 
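If you later call such a text-generation app over its API, the form variables travel in the `inputs` field of the request body. A minimal sketch, assuming Dify's `completion-messages` endpoint shape as I understand it (the app key is a placeholder; verify the exact payload against your app's "API Reference" page):

```python
# Hedged sketch of calling a Dify text-generation app with the two variables
# from this tutorial. Endpoint and payload shape follow Dify's public
# completion API as commonly documented; APP_KEY is a placeholder.
import json
import urllib.request

API_URL = "https://api.dify.ai/v1/completion-messages"
APP_KEY = "app-xxxxxxxx"  # placeholder secret key

def build_request(theme: str, proportion: str, version: str) -> urllib.request.Request:
    """Build the HTTP request; `proportion` and `version` fill the prompt variables."""
    payload = {
        "inputs": {"query": theme, "proportion": proportion, "version": version},
        "response_mode": "blocking",
        "user": "demo-user",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {APP_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("a lighthouse at dawn", "--ar 16:9", "--v 5")
# To actually send it: urllib.request.urlopen(req)
```

Any HTTP client works in place of `urllib`; the key point is that `{{proportion}}` and `{{version}}` are filled from the `inputs` object, just as the dropdowns fill them in the web UI.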
diff --git a/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.mdx
new file mode 100644
index 00000000..4fafe15d
--- /dev/null
+++ b/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.mdx
@@ -0,0 +1,64 @@
---
title: Create an AI Chatbot with Business Data in Minutes
---

AI-powered customer service is becoming a standard feature for business websites, and it is getting easier to implement with higher levels of customization. The following content will guide you on how to create an AI-powered customer service bot for your website in just a few minutes using Dify.

### Prerequisites

**Register or deploy Dify.AI**

Dify is an open-source product which you can find on [GitHub](https://github.com/langgenius/dify) and deploy to your local machine or company intranet. It also provides a cloud SaaS version; visit [Dify.AI](https://dify.ai/) to register and use it.

**Apply for API keys from OpenAI and other model providers**

Dify provides free trial message quotas for OpenAI GPT-series models (200 calls) and Anthropic Claude (1,000 calls); model calls consume tokens. Before you run out, apply for your own API key through the model provider's official channel. You can enter the key under Dify's "Settings" - "Model Provider".

### Upload your product documentation or knowledge base

If you want to build an AI chatbot based on your company's existing knowledge base and product documents, upload as many product-related documents as possible to a Dify knowledge base. Dify helps you **complete segmentation and cleaning of the data.** Dify knowledge bases support two indexing modes: high quality and economical. We recommend the high quality mode, which consumes tokens but provides higher accuracy.

1. Create a new knowledge base.
2.
Upload your business data (batch uploading of multiple files is supported).
3. Select the cleaning method.
4. Click \[Save and Process]; processing takes only a few seconds.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/3ba45ef5b8859d80cded096caf386c1b.png)

### Create an AI application and give it instructions

Create a conversational app on the \[Build App] page, then set up the prompt and its front-end user experience.

1. Give the AI instructions: click "Pre Prompt" on the left to edit your prompt so that the AI plays the role of customer service when communicating with users. You can specify its tone and style, and restrict which questions it will or will not answer.
2. Give the AI your business knowledge: add the knowledge base you just uploaded in \[Context].
3. Set up the opening remarks: turn this on under "Add Feature". It adds an opening line so that when a user opens the customer service window, the AI greets them first, which adds a friendly touch.
4. Set up "Next Question Suggestion": turn this on under "Add Feature". It gives users a direction for their next question after they have asked one.
5. Choose a suitable model and adjust the parameters: different models can be selected in the upper-right corner of the page; they differ in performance and token price. In this example, we use the GPT-3.5 model.

In this case, we assign a role to the AI:

> Pre Prompt: You are Bob, the AI customer service for Dify, specializing in answering questions about Dify's products, team, or LLMOps for users. Please note: refuse to answer "inappropriate questions", i.e., content beyond the scope of this document.

> Opening remarks: Hey `{{User_name}}`, I'm Bob, the first AI member of Dify.
You can discuss with me any questions related to Dify products, the team, and even LLMOps.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/1175a854034f6814b8066852d63bcdf8.png)

### Debug the AI chatbot's performance and publish

After completing the setup, you can send messages to the bot on the right side of the current page to check whether its performance meets expectations. Then click "Publish", and you have an AI chatbot.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/6e917021e11803171d7801af948ffc4c.png)

### Embed the AI chatbot into your front-end page

This step embeds the prepared AI chatbot into your official website. Click \[Overview] -> \[Embedded], select the script tag method, and copy the script code into the `<head>` or `<body>` tag of your website. If you are not a technical person, you can ask the developer responsible for the website to paste it in and update the page.

![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/8945e4cbb2ffbd9c5e2272fd99768be3.png)

1. Paste the copied code into the target location on your website.
2. Update your website, and you will have an AI customer service bot powered by your business data. Try it out to see the effect.

The above shows how Bob, the AI chatbot on Dify's official website, is embedded there. You can also use more of Dify's features to enhance the chatbot, such as adding variable settings so that users provide necessary information, such as their name or the specific product they use, before the conversation starts.

Welcome to explore Dify together!
diff --git a/en/learn-more/use-cases/dify-model-arena.mdx
new file mode 100644
index 00000000..4e25a65a
--- /dev/null
+++ b/en/learn-more/use-cases/dify-model-arena.mdx
@@ -0,0 +1,41 @@
---
title: How to Experience a “Model Arena” in Dify? DeepSeek R1 vs o1
---

## Overview

Dify’s **[“Multiple Model Debugging”](/en/guides/application-orchestrate/multiple-llms-debugging.md)** feature in Chatbot applications allows you to observe how different large language models respond to the same question. This guide uses the example of **DeepSeek R1 vs o1** to demonstrate how to intuitively compare various models’ responses within Dify.

![](https://assets-docs.dify.ai/2025/02/dd2a54e05cf5bfa252ac980ec478e3d5.png)

## Prerequisites

- Dify.AI (Cloud or Community Edition)
- DeepSeek R1 API
- OpenAI o1 API

## Quick Start

### 1. Configure LLM API Keys

Before testing, click **“Profile → Model Provider”** (top-right corner) and follow the prompts to add the API keys for multiple models. For more details, please check [here](https://docs.dify.ai/guides/model-configuration).

### 2. Create an Application

Create a **Chatbot** application, specifying a name and description to complete the setup.

![](https://assets-docs.dify.ai/2025/02/7246807cbd0776564b76e1ef37dcbd4d.png)

### 3. Select the Models

Click the **Model** selection button in the top-right corner of the application screen. Choose the `o1` model, then select **“Debug as Multiple Models”** and add the `deepseek-reasoner` model.

![](https://assets-docs.dify.ai/2025/02/61d8ba00a8a89052ac7a5a9d8fb54f58.png)

### 4. Compare Results

Enter a question in the chat window. You can now view the responses from different models side by side for the same prompt.
+ +![](https://assets-docs.dify.ai/2025/02/03ac1c1da6705d76b01f5867a1e24e32.gif) + +For more information, or if you encounter any issues, see [Multiple Model Debugging](/en/guides/application-orchestrate/multiple-llms-debugging.md). diff --git a/en/learn-more/use-cases/dify-schedule.mdx b/en/learn-more/use-cases/dify-schedule.mdx new file mode 100644 index 00000000..1c349c4b --- /dev/null +++ b/en/learn-more/use-cases/dify-schedule.mdx @@ -0,0 +1,127 @@ +--- +title: Building the Dify Scheduler +--- + + +> Author: [Leo\_chen](https://github.com/leochen-g), creator of [Dify Schedule](https://github.com/leochen-g/dify-schedule) and [Smart WeChat Assistant](https://github.com/leochen-g/wechat-assistant-pro) + +## Overview + +Tired of manually running Dify Workflows? Missing scheduled task support? With **Dify Schedule Assistant**, you can easily add scheduling capabilities to Dify Workflows. Using GitHub Actions, you can set up automated task execution with real-time notifications to optimize your workflow efficiency. + +> Note: This tool only supports Dify Workflow applications + +## 🌟 Core Features + +* 🔄 Parallel execution of multiple Workflows +* ⏰ Flexible scheduling (Default: UTC+8 06:30) +* 📱 Multi-channel notifications + * Enterprise: WeCom, DingTalk, Feishu + * Personal: WeChat, Email, Server Chan, Pushplus +* 🔒 Secure execution via GitHub Actions +* 🐲 Support for QingLong Panel deployment + +## 🚀 Quick Start + +Two deployment options available: + +1. **Online (GitHub Actions)** +2. **Local (QingLong Panel)** + +### Option 1: GitHub Actions + +1. **Fork Repository** Visit [Dify Schedule Repository](https://github.com/leochen-g/dify-schedule) and fork it. +2. 
**Configure Secrets** Go to **Settings -> Secrets -> New repository secret** and add: + + | Secret Name | Content | Required | + | --------------- | ---------------------------------------------- | -------- | + | `DIFY_BASE_URL` | Dify API URL (Default: https://api.dify.ai/v1) | No | + | `DIFY_TOKENS` | Dify Workflow API keys (separate with `;`) | Yes | + | `DIFY_INPUTS` | Workflow variables (JSON format) | No | + + **Notification Settings (Optional)** + + | Secret Name | Content | Purpose | + | ------------------------ | ------------------------------------------------------------------------------------- | ------------ | + | `EMAIL_USER` | Sender email (SMTP enabled) | Email | + | `EMAIL_PASS` | SMTP password | Email | + | `EMAIL_TO` | Recipient emails (separate with `,`) | Email | + | `PUSHPLUS_TOKEN` | [Pushplus](http://www.pushplus.plus/) token | WeChat | + | `SERVERPUSHKEY` | [Server Chan](https://sct.ftqq.com/) key | WeChat | + | `DINGDING_WEBHOOK` | DingTalk bot webhook | DingTalk | + | `WEIXIN_WEBHOOK` | WeCom bot webhook | WeCom | + | `FEISHU_WEBHOOK` | Feishu bot webhook | Feishu | + | `AIBOTK_KEY` | [Smart WeChat Assistant](https://wechat.aibotk.com/?r=dBL0Bn\&f=difySchedule) API Key | WeChat | + | `AIBOTK_ROOM_RECIVER` | WeChat group name | Group chat | + | `AIBOTK_CONTACT_RECIVER` | WeChat contact nickname | Private chat | +3. **Enable Workflow** Go to the **Actions** tab and enable workflows. + +### Option 2: Local Deployment + +> QingLong Panel is an open-source task scheduler. [Project Link](https://github.com/whyour/qinglong) + +1. **Install QingLong Panel** Follow instructions at [project page](https://github.com/whyour/qinglong). +2. **Add Subscription** Run: + + ```bash + ql repo https://github.com/leochen-g/dify-schedule.git "ql_" "utils" "sdk" + ``` +3. **Install Dependencies** + * Go to **Dependencies -> NodeJS** + * Install `axios` +4.
**Configure Environment Variables** + * `DIFY_TOKENS`: Workflow API keys (Required) + * `DIFY_BASE_URL`: API URL (Optional) + * Separate multiple tokens with `;` +5. **Notifications** + * Use QingLong's built-in notification system + +## 📸 Notification Preview + + + + + + + + + + + + + + +
| WeChat Notification Example | Email Notification Example | + | :---: | :---: | + | *(WeChat notification screenshot)* | *(email notification screenshot)* |
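One detail worth spelling out: the `DIFY_INPUTS` secret configured above must hold all of the workflow's input variables as a single JSON object. A minimal sketch, assuming hypothetical variable names (`city`, `topic`) that you would replace with the inputs defined in your own workflow:

```python
import json

# Hypothetical value for the DIFY_INPUTS secret; the keys are
# placeholders and must match your workflow's input variables.
dify_inputs = '{"city": "Beijing", "topic": "daily news"}'

# The value must parse as one JSON object, or the scheduled
# run will fail with a variable error.
parsed = json.loads(dify_inputs)
print(parsed["city"])
```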
+## ❓ Troubleshooting Guide + +### Getting API Keys + +1. Log in to the Dify console +2. Open the target Workflow +3. Visit the API Reference page +4. Get the API key + +![](https://assets-docs.dify.ai/2025/01/f7239b198b4aeac98d209bfcebae153d.png) + +## Common Issues + +1. **Connection Issues** + * Ensure private Dify instances are accessible from the public internet + * Verify network and firewall settings +2. **Execution Errors** + * Verify the application type is Workflow + * Check the `DIFY_INPUTS` JSON format + * Review logs for missing variables + +Report other issues on GitHub (remove sensitive information). + +## 🤝 Contributing + +Community contributions are welcome: + +* Feature suggestions +* Bug fixes +* Documentation improvements +* New features + +Participate via Pull Requests or Issues. diff --git a/en/learn-more/use-cases/how-to-connect-aws-bedrock.mdx b/en/learn-more/use-cases/how-to-connect-aws-bedrock.mdx new file mode 100644 index 00000000..ef9b5e27 --- /dev/null +++ b/en/learn-more/use-cases/how-to-connect-aws-bedrock.mdx @@ -0,0 +1,170 @@ +--- +title: How to connect with AWS Bedrock Knowledge Base? +--- + + +This article briefly introduces how to connect the Dify platform with the AWS Bedrock knowledge base through the [external knowledge base API](https://docs.dify.ai/guides/knowledge-base/external-knowledge-api-documentation), so that AI applications on the Dify platform can directly retrieve content stored in the AWS Bedrock knowledge base, adding a new source of information. + +### Prerequisites + +* AWS Bedrock Knowledge Base +* Dify SaaS Service / Dify Community Version +* Backend API Development Basics + +### 1. Register and Create AWS Bedrock Knowledge Base + +Visit [AWS Bedrock](https://aws.amazon.com/bedrock/) and create the Knowledge Base service. + +![Create AWS Bedrock Knowledge Base](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/1bf24647a5ffeba4accecd1052011980.png) + +### 2.
Build the Backend API Service + +The Dify platform cannot connect to the AWS Bedrock Knowledge Base directly. Developers need to follow Dify's [API definition](../../guides/knowledge-base/external-knowledge-api-documentation.md) for external knowledge base connections, manually create a backend API service, and have it establish the connection with AWS Bedrock, as shown in the architecture diagram: + +![Build the backend API service](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/c57ce0a77ee668268f53e91497bd5c2b.png) + +You can refer to the following two demo code files. + +`knowledge.py` + +```python +from flask import request +from flask_restful import Resource, reqparse + +from bedrock.knowledge_service import ExternalDatasetService + + +class BedrockRetrievalApi(Resource): + # url : /retrieval + def post(self): + parser = reqparse.RequestParser() + parser.add_argument("retrieval_setting", nullable=False, required=True, type=dict, location="json") + parser.add_argument("query", nullable=False, required=True, type=str) + parser.add_argument("knowledge_id", nullable=False, required=True, type=str) + args = parser.parse_args() + + # Authorization check (reject missing or malformed headers) + auth_header = request.headers.get("Authorization") + if auth_header is None or " " not in auth_header: + return { + "error_code": 1001, + "error_msg": "Invalid Authorization header format. Expected 'Bearer <api-key>' format." + }, 403 + auth_scheme, auth_token = auth_header.split(None, 1) + auth_scheme = auth_scheme.lower() + if auth_scheme != "bearer": + return { + "error_code": 1001, + "error_msg": "Invalid Authorization header format. Expected 'Bearer <api-key>' format."
+ }, 403 + if auth_token: + # process your authorization logic here + pass + + # Call the knowledge retrieval service + result = ExternalDatasetService.knowledge_retrieval( + args["retrieval_setting"], args["query"], args["knowledge_id"] + ) + return result, 200 +``` + +`knowledge_service.py` + +```python +import boto3 + + +class ExternalDatasetService: + @staticmethod + def knowledge_retrieval(retrieval_setting: dict, query: str, knowledge_id: str): + # get bedrock client; replace the placeholder values with your AWS credentials and region + client = boto3.client( + "bedrock-agent-runtime", + aws_secret_access_key="AWS_SECRET_ACCESS_KEY", + aws_access_key_id="AWS_ACCESS_KEY_ID", + # example: us-east-1 + region_name="AWS_REGION_NAME", + ) + # fetch external knowledge retrieval + response = client.retrieve( + knowledgeBaseId=knowledge_id, + retrievalConfiguration={ + "vectorSearchConfiguration": {"numberOfResults": retrieval_setting.get("top_k"), "overrideSearchType": "HYBRID"} + }, + retrievalQuery={"text": query}, + ) + # parse response + results = [] + if response.get("ResponseMetadata") and response.get("ResponseMetadata").get("HTTPStatusCode") == 200: + if response.get("retrievalResults"): + retrieval_results = response.get("retrievalResults") + for retrieval_result in retrieval_results: + # filter out results with score less than threshold + if retrieval_result.get("score") < retrieval_setting.get("score_threshold", 0.0): + continue + result = { + "metadata": retrieval_result.get("metadata"), + "score": retrieval_result.get("score"), + "title": retrieval_result.get("metadata").get("x-amz-bedrock-kb-source-uri"), + "content": retrieval_result.get("content").get("text"), + } + results.append(result) + return { + "records": results + } +``` + +While building the service, define the API endpoint address and an API Key for authentication; you will need both in the following steps. + +### 3.
Get the AWS Bedrock Knowledge Base ID + +Log in to the AWS Bedrock console and get the ID of the Knowledge Base you created; you can use this parameter to connect to the Dify platform in the subsequent steps. + +![Get the AWS Bedrock Knowledge Base ID](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/03fd7e383daf419765bbfd05c3742923.png) + +### 4. Associate the External Knowledge API + +Go to the **"Knowledge"** page in the Dify platform, click **"External Knowledge API"** in the upper right corner, and click **"Add an External Knowledge API"**. + +Follow the prompts on the page and fill in the following information: + +* The name of the knowledge base. Custom names are allowed to distinguish different external knowledge APIs connected to the Dify platform; +* API endpoint address, the connection address of the external knowledge base, which can be customized in [Step 2](how-to-connect-aws-bedrock.md#id-2.build-the-backend-api-service). Example: `api-endpoint/retrieval`; +* API Key, the external knowledge base connection key, which can be customized in [Step 2](how-to-connect-aws-bedrock.md#id-2.build-the-backend-api-service). + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/4f2ce69f8c59e788beb91303c68db571.png) + +### 5. Connect to External Knowledge Base + +Go to the **“Knowledge”** page, click **“Connect to an External Knowledge Base”** below the add knowledge base card to jump to the parameter configuration page. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/c3c30c7ed923ee1d7986e92bb35186c6.png) + +Fill in the following parameters: + +* **Knowledge base name and description** +* **External knowledge base API** + +Select the external knowledge base API associated in Step 4. +* **External knowledge base ID** + +Fill in the AWS Bedrock knowledge base ID obtained in Step 3.
+ +* **Adjust recall settings** + +**Top K:** When a user asks a question, the external knowledge API will be requested to obtain the most relevant content chunks. This parameter is used to filter text chunks with high similarity to user questions. The default value is 3; the higher the value, the more text chunks will be recalled. + +**Score threshold:** The similarity threshold for text chunk filtering. Only text chunks with a score exceeding the set value will be recalled. The default value is 0.5. The higher the value, the higher the similarity required between the text and the question, so fewer text chunks will be recalled and the results will be more accurate. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/58c324c08c1c5fc23ab31e04967ae449.png) + +After the settings are completed, you can establish a connection with the external knowledge base API. + +### 6. Test External Knowledge Base Connection and Retrieval + +After establishing a connection with an external knowledge base, developers can simulate likely user queries in **"Retrieval Test"** and preview the text chunks retrieved from the AWS Bedrock Knowledge Base. + +![Test the connection and retrieval of the external knowledge base](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/fbc64f060f9fda83847428439b07c247.png) + +If you are not satisfied with the retrieval results, you can try to modify the retrieval parameters or adjust the retrieval settings of the AWS Bedrock Knowledge Base.
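Besides the built-in Retrieval Test panel, you can also exercise the backend service from Step 2 directly. The sketch below prepares the request that Dify sends to the external knowledge API; the endpoint, API Key, and knowledge base ID are placeholders for the values created in the earlier steps, and the actual network call is left commented out:

```python
import json
import urllib.request

# Placeholders: substitute the endpoint, API Key, and knowledge
# base ID that you created in Steps 2 and 3.
API_ENDPOINT = "https://your-backend-host/retrieval"
API_KEY = "your-api-key"

payload = {
    "knowledge_id": "your-bedrock-knowledge-base-id",
    "query": "What is covered in the onboarding guide?",
    "retrieval_setting": {"top_k": 3, "score_threshold": 0.5},
}

request = urllib.request.Request(
    API_ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to query your running backend service:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["records"])
```

A healthy service responds with a `records` array shaped like the one assembled in `knowledge_service.py`.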
+ +![Adjust the text chunking parameters of AWS Bedrock Knowledge Base](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/0984a89e4cf2a1150ec3160da14c656e.png) diff --git a/en/learn-more/use-cases/how-to-creat-dify-schedule.mdx b/en/learn-more/use-cases/how-to-creat-dify-schedule.mdx new file mode 100644 index 00000000..0e3f0ed4 --- /dev/null +++ b/en/learn-more/use-cases/how-to-creat-dify-schedule.mdx @@ -0,0 +1,101 @@ +--- +title: ⏰ Dify Schedule Helper +--- + + +> Author: [Leo_chen](https://github.com/leochen-g) + +> [X(Twitter)](https://x.com/leochen_code) + +Tired of manually running your Dify workflows? Let's add some automation magic! + +✨ The Dify Schedule Helper brings you the power of scheduled tasks - a feature not available in the official Dify platform. Using GitHub Actions, you can now schedule your workflows to run automatically and get instant notifications about their execution. + +## 🎯 What Can It Do? + +- 🔄 Automatically execute multiple Dify workflows on schedule +- ⏰ Run tasks at your preferred time (default: 06:30 Beijing Time) +- 📱 Send notifications through various channels +- 🆓 Completely free to use +- 🔒 Secure and reliable with GitHub Actions + +## 🚀 Getting Started + +You can use this automation in two ways: +- 🌐 Quick Start (Cloud-based) +- 🖥️ Local deployment (QingLong Panel) + +### 🌐 Quick Start + +Let's get your automated workflows up and running in just a few steps! + +1. 🍴 First, [Fork the repository](https://github.com/leochen-g/dify-schedule) + +2. ⚙️ Set up your secrets: + Navigate to: Repository Settings -> Secrets -> New repository secret + + | Secret Name | What to Put | Required? | + |------------|-------------|-----------| + | DIFY_BASE_URL | Your Dify API URL (default: https://api.dify.ai/v1) | No | + | DIFY_TOKENS | Your Dify workflow API keys (use `;` to separate multiple keys) | Yes | + | DIFY_INPUTS | JSON format workflow variables (if required by your Dify setup) | No | + + ### 📱 Notification Settings (Optional but recommended!)
+ + | Secret Name | What to Put | For | + |------------|-------------|-----| + | EMAIL_USER | SMTP-enabled email address | Email notifications | + | EMAIL_PASS | SMTP password | Email notifications | + | EMAIL_TO | Recipient email(s) (use `, ` for multiple) | Email notifications | + | PUSHPLUS_TOKEN | [Pushplus](http://www.pushplus.plus/) token | WeChat notifications | + | SERVERPUSHKEY | [Server Chan](https://sct.ftqq.com/) key | WeChat notifications | + | DINGDING_WEBHOOK | DingTalk robot webhook | DingTalk notifications | + | WEIXIN_WEBHOOK | WeCom robot webhook | WeCom notifications | + | FEISHU_WEBHOOK | Feishu robot webhook | Feishu notifications | + | AIBOTK_KEY | [WeChat Assistant](https://wechat.aibotk.com?r=dBL0Bn&f=difySchedule) API key | WeChat notifications | + | AIBOTK_ROOM_RECIVER | Group chat name for WeChat Assistant | WeChat group notifications | + | AIBOTK_CONTACT_RECIVER | Contact name for WeChat Assistant | WeChat private notifications | + +3. ▶️ Enable the workflow: + Go to the Actions tab and enable the workflows + +## 📸 Preview + +| WeChat Notification | Email Notification | +|:------------------:|:------------------:| +| *(WeChat notification screenshot)* | *(email notification screenshot)* |
+ +### 🚫 Connection Issues? + +If you're using a self-hosted Dify instance, make sure it's accessible from the public internet - GitHub Actions needs to reach your server! + +### ❌ Execution Errors? + +1. 🔍 Check if your application is a workflow application +2. ⚙️ If your workflow requires input variables, configure `DIFY_INPUTS` with valid JSON +3. 📝 Read the error logs carefully and ensure all required variables are set correctly + +Need more help? Feel free to open an issue with your logs (remember to remove sensitive information)! \ No newline at end of file diff --git a/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx b/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx new file mode 100644 index 00000000..b7ed1cb6 --- /dev/null +++ b/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx @@ -0,0 +1,93 @@ +--- +title: Integrating Dify Chatbot into Your Wix Website +--- + + +Wix, a popular website creation platform, allows users to visually design their websites through drag-and-drop functionality. By leveraging Wix's iframe code feature, you can seamlessly integrate a Dify chatbot into your Wix site. + +This functionality extends beyond chatbot integration, enabling you to display content from external servers and other sources within your Wix pages. Examples include weather widgets, stock tickers, calendars, or any custom web elements. + +This guide will walk you through the process of embedding a Dify chatbot into your Wix website using iframe code. The same method can be applied to integrate Dify applications into other websites, blogs, or web pages. + +## 1. Obtaining the Dify Application iFrame Code Snippet + +Assuming you've already created a [Dify AI application](https://docs.dify.ai/guides/application-orchestrate/creating-an-application), follow these steps to acquire the iFrame code snippet: + +1. Log into your Dify account +2. 
Select the Dify application you wish to embed +3. Click the "Publish" button in the upper right corner +4. On the publish page, choose the "Embed Into Site" option + + ![Embed Into Site Option](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/238daf21d6632d7d20cc9a6b882194f9.png) +5. Select an appropriate style and copy the displayed iFrame code. For example: + + ![iFrame Code Example](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/ebab9e1f734d80e793db3675a46099fb.png) + +## 2. Embedding the iFrame Code Snippet in Your Wix Site + +1. Log into your Wix website and open the page you want to edit +2. Click the blue `+` (Add Elements) button on the left side of the page +3. Select **Embed Code**, then click **Embed HTML** to add an HTML iFrame element to the page + + ![Add HTML iFrame](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/43b79d103da8beb69619e8f122609b89.png) +4. In the `HTML Settings` box, select the `Code` option +5. Paste the iFrame code snippet you obtained from your Dify application +6. Click the **Update** button to save and preview your changes + +Here's an illustrative example of an iFrame code snippet for embedding a Dify Chatbot (the actual `src` URL is generated by Dify for your application): + +```html +<iframe src="https://udify.app/chatbot/YOUR_TOKEN" style="width: 100%; height: 100%; min-height: 700px;" frameborder="0" allow="microphone"></iframe> +``` + +![Insert Dify iFrame Code](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/learn-more/use-cases/fb257be087645aaa5eb2ae01a4a15d8d.png) + +> ⚠️ Ensure the address in the iFrame code begins with HTTPS. HTTP addresses will not display correctly. + +## 3. Customizing Your Dify Chatbot + +You can adjust the Dify Chatbot's button style, position, and other settings. + +### 3.1 Customizing Style + +Modify the `style` attribute in the iFrame code to customize the Chatbot's appearance. For example, to add a 2-pixel wide solid black border (`border: 2px solid #000`): + +```html +<!-- before --> +<iframe src="https://udify.app/chatbot/YOUR_TOKEN" style="width: 100%; height: 100%; min-height: 700px;" frameborder="0" allow="microphone"></iframe> + +<!-- after --> +<iframe src="https://udify.app/chatbot/YOUR_TOKEN" style="width: 100%; height: 100%; min-height: 700px; border: 2px solid #000;" frameborder="0" allow="microphone"></iframe> +``` + +This code adds a 2-pixel wide solid black border to the chatbot interface.
+ +### 3.2 Customizing Position + +Adjust the Chatbot's position by modifying the `position` value in the `style` attribute. For example, to fix the Chatbot to the bottom right corner of the webpage, 20 pixels from the bottom and right edges (the `src` URL below is an illustrative placeholder): + +```html +<!-- before --> +<iframe src="https://udify.app/chatbot/YOUR_TOKEN" style="width: 100%; height: 100%; min-height: 700px;" frameborder="0" allow="microphone"></iframe> + +<!-- after --> +<iframe src="https://udify.app/chatbot/YOUR_TOKEN" style="position: fixed; bottom: 20px; right: 20px; width: 100%; height: 100%; min-height: 700px;" frameborder="0" allow="microphone"></iframe> +``` + +This code fixes the Chatbot to the bottom right corner of the webpage, 20 pixels from the bottom and right edges. + +## FAQ + +**1. iFrame Content Not Displaying** + +* Verify that the URL starts with HTTPS +* Check for typos in the `iframe` code +* Verify the embedded content complies with Wix's security policies + +**2. iFrame Content is Cropped** + +Modify the `width` and `height` percentage values in the `iframe` code to resolve content truncation issues. diff --git a/en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.mdx b/en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.mdx new file mode 100644 index 00000000..9e820cf9 --- /dev/null +++ b/en/learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.mdx @@ -0,0 +1,103 @@ +--- +title: "DeepSeek & Dify Integration Guide: Building AI Applications with Multi-Turn Reasoning" +--- + + +## Overview + +As an open-source generative AI application development platform, **Dify** empowers developers to build smarter AI applications leveraging DeepSeek LLMs. The Dify platform delivers these key development experiences: + +* **Visual Development** - Create DeepSeek R1-powered AI applications in 3 minutes through intuitive visual orchestration +* **Knowledge Base Augmentation** - Activate RAG capabilities by connecting internal documents to build precision Q\&A systems +* **Workflow Expansion** - Implement complex business logic via drag-and-drop functional nodes and third-party tool plugins +* **Data Insights** - Comes with built-in metrics on total conversations, user engagement, and more, and supports integration with specialized monitoring platforms.
+ +This guide details DeepSeek API integration with Dify to achieve two core implementations: + +* **Intelligent Chatbot Development** - Directly harness DeepSeek R1's chain-of-thought reasoning capabilities +* **Knowledge-Enhanced Application Construction** - Enable accurate information retrieval and generation through private knowledge bases + +> For compliance-sensitive industries like finance and legal, Dify offers [**Private Deployment of DeepSeek + Dify: Build Your Own AI Assistant**](./private-ai-ollama-deepseek-dify.md): +> +> * Synchronized deployment of DeepSeek models and Dify platform in private networks +> * Full data sovereignty assurance + +The Dify × DeepSeek integration enables developers to bypass infrastructure complexities and directly advance to **scenario-based AI implementation**, accelerating the transformation of LLM technology into operational productivity. + +*** + +## Prerequisites + +### 1. Obtain DeepSeek API Key + +Visit the [DeepSeek API Platform](https://platform.deepseek.com/) and follow the instructions to request an API Key. + +> If the link is inaccessible, consider deploying DeepSeek locally. See the [local deployment guide](./private-ai-ollama-deepseek-dify.md) for more details. + +### 2. Register on Dify + +Dify is a platform that helps you quickly build generative AI applications. By integrating DeepSeek’s API, you can easily create a functional DeepSeek-powered AI app. + +*** + +## Integration Steps + +### 1. Connect DeepSeek to Dify + +Go to the Dify platform and navigate to **Profile → Settings → Model Providers**. Locate DeepSeek, paste the API Key obtained earlier, and click **Save**. Once validated, you will see a success message. + +![](https://assets-docs.dify.ai/2025/01/a7d6b4e05a3c9d85d0cb42f4dd018bc8.png) + +*** + +### 2. Create a DeepSeek AI Application + +1. On the Dify homepage, click **Create Blank App** on the left sidebar and select **Chatbot**. Give it a simple name. 
+ +![](https://assets-docs.dify.ai/2025/01/7f56bc3c836c7248043b656fa95e474e.png) + +2. Choose the `deepseek-reasoner` model. + +> The deepseek-reasoner model is also known as the deepseek-r1 model. + +![](https://assets-docs.dify.ai/2025/01/de134c6285985fe1552223eb33641b9f.png) + +Once configured, you can start interacting with the chatbot. + +![](https://assets-docs.dify.ai/2025/01/3760e9a0cb7c2070978134d8f7f13929.png) + +*** + +### 3. Enable Text Analysis with Knowledge Base + +[Retrieval-Augmented Generation (RAG)](https://docs.dify.ai/zh-hans/learn-more/extended-reading/retrieval-augment) is an advanced technique that enhances AI responses by retrieving relevant knowledge. By providing the model with necessary contextual information, it improves response accuracy and relevance. When you upload internal documents or domain-specific materials, the AI can generate more informed answers based on this knowledge. + +#### 3.1 Create a Knowledge Base + +Upload documents containing information you want the AI to analyze. To ensure DeepSeek accurately understands document content, it is recommended to use the **Parent-Child Segmentation** mode. This preserves document hierarchy and context. See [Create a Knowledge Base](https://docs.dify.ai/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents) for detailed steps. + +![](https://assets-docs.dify.ai/2025/01/f38af53d2b124391e2ea32f29da7d87d.png) + +#### 3.2 Integrate the Knowledge Base into the AI App + +In the AI app's **Context** settings, add the knowledge base. When users ask questions, the LLM will first retrieve relevant information from the knowledge base before generating a response. + +![](https://assets-docs.dify.ai/2025/01/4254ec131fece172a59304414a060f4e.png) + +*** + +### 4. Share the AI Application + +Once built, you can share the AI application with others or integrate it into other websites. 
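Beyond the share link, a published app can also be called programmatically through Dify's service API. A minimal sketch of a `chat-messages` request (the API key is a placeholder for your app's API key; `response_mode` may be `blocking` or `streaming`, and the network call is left commented out):

```python
import json
import urllib.request

API_BASE = "https://api.dify.ai/v1"  # use your own host if self-hosted
API_KEY = "app-your-api-key"         # placeholder: the app's API key

payload = {
    "inputs": {},                    # app variables, if any
    "query": "Summarize the key points of the uploaded handbook.",
    "response_mode": "blocking",     # or "streaming"
    "user": "demo-user",             # a stable end-user identifier
}

request = urllib.request.Request(
    f"{API_BASE}/chat-messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to send the request:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["answer"])
```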
+ +![](https://assets-docs.dify.ai/2025/01/d32857964683b48027d20d029e7e06c0.png) + +*** + +## Further Reading + +Beyond simple chatbot applications, you can also use Chatflow or Workflow to build more complex AI solutions with capabilities like document recognition, image processing, and speech recognition. See the following resources for more details: + +* [Workflow](https://docs.dify.ai/zh-hans/guides/workflow) +* [File Upload](https://docs.dify.ai/zh-hans/guides/workflow/file-upload) +* [Deploy DeepSeek + Dify Locally to Build a Private AI Assistant](./private-ai-ollama-deepseek-dify.md) diff --git a/en/learn-more/use-cases/private-ai-ollama-deepseek-dify.mdx b/en/learn-more/use-cases/private-ai-ollama-deepseek-dify.mdx new file mode 100644 index 00000000..4599ca0d --- /dev/null +++ b/en/learn-more/use-cases/private-ai-ollama-deepseek-dify.mdx @@ -0,0 +1,194 @@ +--- +title: "Private Deployment of Ollama + DeepSeek + Dify: Build Your Own AI Assistant" +--- + + +## Overview + +DeepSeek is an innovative open-source large language model (LLM) that brings a revolutionary experience to AI-powered conversations with its advanced algorithmic architecture and reflective reasoning capabilities. By deploying it privately, you gain full control over data security and system configurations while maintaining flexibility in your deployment strategy. + +Dify, an open-source AI application development platform, offers a complete private deployment solution. By seamlessly integrating a locally deployed DeepSeek model into the Dify platform, enterprises can build powerful AI applications within their own infrastructure while ensuring data privacy. + +### Advantages of Private Deployment: + +* **Superior Performance**: Delivers a conversational experience comparable to commercial models. +* **Isolated Environment**: Runs entirely offline, eliminating data leakage risks. +* **Full Data Control**: Retains complete ownership of data assets, ensuring compliance.
+ +*** + +## Prerequisites + +### **Hardware Requirements:** + +* **CPU**: ≥ 2 Cores +* **RAM/GPU Memory**: ≥ 16 GiB (Recommended) + +### **Software Requirements:** + +* [Docker](https://www.docker.com/) +* Docker Compose +* [Ollama](https://ollama.com/) +* [Dify Community Edition](https://github.com/langgenius/dify) + +*** + +## Deployment Steps + +### 1. Install Ollama + +[Ollama](https://ollama.com/) is a cross-platform LLM management client (macOS, Windows, Linux) that enables seamless deployment of large language models like DeepSeek, Llama, and Mistral. Ollama provides a one-click model deployment solution, ensuring that all data remains stored locally for complete security and privacy. + +Visit [Ollama's official website](https://ollama.com/) and follow the installation instructions for your platform. After installation, verify it by running the following command: + +```bash +➜ ~ ollama -v +ollama version is 0.5.5 +``` + +Select an appropriate DeepSeek model size based on your available hardware. A 7B model is recommended for initial installation. + +![](https://assets-docs.dify.ai/2025/01/26978571a8d5f7188a952606f62e6a32.png) + +Run the following command to install the DeepSeek R1 model: + +```bash +ollama run deepseek-r1:7b +``` + +![](https://assets-docs.dify.ai/2025/01/9297451d07d7704f73d6db0a83842f5f.png) + +### 2. Install Dify Community Edition + +Clone the Dify GitHub repository and follow the installation process: + +```bash +git clone https://github.com/langgenius/dify.git +cd dify/docker +cp .env.example .env +docker compose up -d # Use `docker-compose up -d` if running Docker Compose V1 +``` + +After running the command, you should see all containers running with proper port mappings. For detailed instructions, refer to [Deploy with Docker Compose](https://docs.dify.ai/getting-started/install-self-hosted/docker-compose). + +Dify Community Edition runs on port 80 by default.
You can access your private Dify platform at: `http://your_server_ip` + +### 3. Integrate DeepSeek with Dify + +Go to **Profile → Settings → Model Providers** in the Dify platform. Select **Ollama** and click **Add Model**. + +> Note: The “DeepSeek” option in Model Providers refers to the online API service, whereas the Ollama option is used for a locally deployed DeepSeek model. + +Configure the Model: + +* **Model Name**: Enter the deployed model name, e.g., `deepseek-r1:7b`. + +* **Base URL**: Set the Ollama client’s local service URL, typically `http://your_server_ip:11434`. If you encounter connection issues, please refer to the [FAQ](https://docs.dify.ai/learn-more/use-cases/private-ai-ollama-deepseek-dify#id-1.-connection-errors-when-using-docker). + +* **Other settings**: Keep default values. According to the [DeepSeek model specifications](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B), the max token length is 32,768. + +![](https://assets-docs.dify.ai/2025/01/6f3b53427e46786ba7d1374739344142.png) + +## Build AI Applications + +### DeepSeek AI Chatbot (Simple Application) + +1. On the Dify homepage, click Create Blank App, select Chatbot, and give it a name. + +![](https://assets-docs.dify.ai/2025/01/7f56bc3c836c7248043b656fa95e474e.png) + +2. Select the `deepseek-r1:7b` model under Ollama in the Model Provider section. + +![](https://assets-docs.dify.ai/2025/01/dbd7170abd35f545481ecc0beef85333.png) + +3. Enter a message in the chat preview to verify the model’s response. If it replies correctly, the chatbot is online. + +![](https://assets-docs.dify.ai/2025/01/619fbbd48e55a1e6a598b4039dd631f5.png) + +4. Click the Publish button to obtain a shareable link or embed the chatbot into other websites. + +### DeepSeek AI Chatflow / Workflow (Advanced Application) + +> Chatflow / Workflow applications enable the creation of more complex AI solutions, such as document recognition, image processing, and speech recognition.
For more details, please check the [Workflow Documentation](https://docs.dify.ai/guides/workflow). + +1. Click Create Blank App, then select Chatflow or Workflow, and name the application. + +![](https://assets-docs.dify.ai/2025/01/cb8637be4dca5a0e684fd9a21df3711f.png) + +2. Add an LLM node, select the `deepseek-r1:7b` model under Ollama, and insert the `{{#sys.query#}}` variable into the system prompt to connect it to the initial node. If you encounter any API issues, you can handle them via [Load Balancing](https://docs.dify.ai/guides/model-configuration/load-balancing) or the [Error Handling](https://docs.dify.ai/guides/workflow/error-handling) node. + +![](https://assets-docs.dify.ai/2025/01/c21f076398eb09d773d3e543561293e6.png) + +3. Add an End Node to complete the configuration. Test the workflow by entering a query. If the response is correct, the setup is complete. + +![](https://assets-docs.dify.ai/2025/01/820c37c70cb029cba60ca289e8d6e89a.png) + +## FAQ + +### 1. Connection Errors When Using Docker + +If running Dify and Ollama inside Docker results in the following error: + +```bash +httpconnectionpool(host=127.0.0.1, port=11434): max retries exceeded with url:/api/chat +(Caused by NewConnectionError(': +fail to establish a new connection:[Errno 111] Connection refused')) +``` + +**Cause**: + +Ollama is not accessible inside the Docker container because localhost refers to the container itself. + +**Solution**: + +**Setting environment variables on Mac**: + +If Ollama is run as a macOS application, environment variables should be set using `launchctl`: + +1. For each environment variable, call `launchctl setenv`. + + ```bash + launchctl setenv OLLAMA_HOST "0.0.0.0" + ``` +2.
Restart the Ollama application.
+3. If the above steps are ineffective, you can use the following method:
+
+    The issue lies within Docker's networking: to reach the Docker host from inside a container, you should connect to `host.docker.internal`. Therefore, replacing `localhost` with `host.docker.internal` in the service URL will make it work:
+
+    ```bash
+    http://host.docker.internal:11434
+    ```
+
+**Setting environment variables on Linux**:
+
+If Ollama is run as a systemd service, environment variables should be set using `systemctl`:
+
+1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.
+2. For each environment variable, add an `Environment` line under the `[Service]` section:
+
+    ```ini
+    [Service]
+    Environment="OLLAMA_HOST=0.0.0.0"
+    ```
+3. Save and exit.
+4. Reload `systemd` and restart Ollama:
+
+    ```bash
+    systemctl daemon-reload
+    systemctl restart ollama
+    ```
+
+**Setting environment variables on Windows**:
+
+On Windows, Ollama inherits your user and system environment variables.
+
+1. First, quit Ollama by clicking its icon in the taskbar.
+2. Edit system environment variables from the Control Panel.
+3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.
+4. Click OK/Apply to save.
+5. Run `ollama` from a new terminal window.
+
+### 2. How to Modify the Address and Port of the Ollama Service?
+
+By default, Ollama binds to 127.0.0.1 on port 11434. Change the bind address with the `OLLAMA_HOST` environment variable.
diff --git a/en/management/personal-account-management.mdx b/en/management/personal-account-management.mdx
deleted file mode 100644
index 6647388d..00000000
--- a/en/management/personal-account-management.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title: Personal Account Management
----
-
-## Modifying Personal Information
-
-To update your personal account information:
-
-1. Navigate to the Dify team homepage
-2. Click on your avatar in the upper right corner
-3. 
Select **"My Account"** - -You can modify the following details: - -* Avatar -* Username -* Email -* Password - - - Personal Account Management - - -### Integrations - -You can link your GitHub and Google accounts as login methods for your Dify team. Click on your avatar in the upper right corner of the Dify team homepage, then click **"Integrations"** to set up these links. - -### Changing Display Language - -To change the display language, click on your avatar in the upper right corner of the Dify team homepage, then click **"Language"**. Dify supports the following languages: - -* English -* Simplified Chinese -* Traditional Chinese -* Portuguese (Brazil) -* French (France) -* Japanese (Japan) -* Korean (South Korea) -* Russian (Russia) -* Italian (Italy) -* Thai (Thailand) -* Indonesian -* Ukrainian (Ukraine) - -Dify welcomes community volunteers to contribute additional language versions. Visit the [GitHub repository](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) to contribute! - -### Deleting Personal Account - -For team data security considerations, self-service online deletion of personal account information is not currently supported. If you need to completely delete your account, please include the following information in an email and send it to support@dify.ai. - -``` -Delete account: your-email -``` - - - Delete personal account - diff --git a/en/plugins/best-practice/README.mdx b/en/plugins/best-practice/README.mdx new file mode 100644 index 00000000..e2299b17 --- /dev/null +++ b/en/plugins/best-practice/README.mdx @@ -0,0 +1,11 @@ +--- +title: Best Practice +--- + + +For more comprehensive best practices on developing plugins with advanced functionality, please refer to the extended plugin development guidelines. 
+ + + develop-a-slack-bot-plugin.md + + diff --git a/en/plugins/best-practice/develop-a-slack-bot-plugin.mdx b/en/plugins/best-practice/develop-a-slack-bot-plugin.mdx new file mode 100644 index 00000000..3025c304 --- /dev/null +++ b/en/plugins/best-practice/develop-a-slack-bot-plugin.mdx @@ -0,0 +1,334 @@ +--- +title: Develop a Slack Bot Plugin +--- + + +**What You’ll Learn:** + +Gain a solid understanding of how to build a Slack Bot that’s powered by AI—one that can respond to user questions right inside Slack. + +### Project Background + +The Dify plugin ecosystem focuses on making integrations simpler and more accessible. In this guide, we’ll use Slack as an example, walking you through the process of developing a Slack Bot plugin. This allows your team to chat directly with an LLM within Slack, significantly improving how efficiently they can use AI. + +Slack is an open, real-time communication platform with a robust API. Among its features is a webhook-based event system, which is quite straightforward to develop on. We’ll leverage this system to create a Slack Bot plugin, illustrated in the diagram below: + +![Slack Bot diagram ](https://assets-docs.dify.ai/2025/01/a0865d18f1ca4051601ca53fa6f92db2.png) + +> To avoid confusion, the following concepts are explained: +> +> * **Slack Bot** A chatbot on the Slack platform, acting as a virtual user you can interact with in real-time. +> * **Slack Bot Plugin** A plugin in the Dify Marketplace that connects a Dify application with Slack. This guide focuses on how to develop that plugin. + +**How It Works (A Simple Overview):** + +1. **Send a Message to the Slack Bot** + + When a user in Slack sends a message to the Bot, the Slack Bot immediately issues a webhook request to the Dify platform. + +2. **Forward the Message to the Slack Bot Plugin** + + The Dify platform triggers the Slack Bot plugin, which relays the details to the Dify application—similar to entering a recipient’s address in an email system. 
By setting up a Slack webhook address through Slack’s API and entering it in the Slack Bot plugin, you establish this connection. The plugin then processes the Slack request and sends it on to the Dify application, where the LLM analyzes the user’s input and generates a response. + +3. **Return the Response to Slack** + + Once the Slack Bot plugin receives the reply from the Dify application, it sends the LLM’s answer back through the same route to the Slack Bot. Users in Slack then see a more intelligent, interactive experience right where they’re chatting. + +### Prerequisites + +- **Dify plugin developing tool**: For more information, see [Initializing the Development Tool](../tool-initialization.md). +- **Python environment (version ≥ 3.12)**: Refer to this [Python Installation Tutorial](https://pythontest.com/python/installing-python-3-11/) or ask an LLM for a complete setup guide. +- Create a Slack App and Get an OAuth Token + +Go to the [Slack API platform](https://api.slack.com/apps), create a Slack app from scratch, and pick the workspace where it will be deployed. + +![](https://assets-docs.dify.ai/2025/01/c1fd0ac1467faf5a3ebf3818bb234aa8.png) + +1. **Enable Webhooks:** + +![](https://assets-docs.dify.ai/2025/01/7112e0710300f1db16827e17f3deac00.png) + +2. **Install the App in Your Slack Workspace:** + +![](https://assets-docs.dify.ai/2025/01/88c360ff4f7b04fea52174ce330522fa.png) + +3. **Obtain an OAuth Token** for future plugin development: + +![](https://assets-docs.dify.ai/2025/01/dcd8ec947253f2ef9ae121ed77ec9f26.png) + +### 1. Developing the Plugin + +Now we’ll dive into the actual coding. Before starting, make sure you’ve read [Quick Start: Developing an Extension Plugin](../extension.md) or have already built a Dify plugin before. + +#### 1.1 Initialize the Project + +Run the following command to set up your plugin development environment: + +```bash +dify plugin init +``` + +Follow the prompts to provide basic project info. 
Select the `extension` template, and grant both `Apps` and `Endpoints` permissions. + +For additional details on reverse-invoking Dify services within a plugin, see [Reverse Invocation: App](../../../api-documentation/fan-xiang-diao-yong-dify-fu-wu/app.md). + +![Plugins permission](https://assets-docs.dify.ai/2024/12/d89a6282c5584fc43a9cadeddf09c0de.png) + +#### 1.2 Edit the Configuration Form + +This plugin needs to know which Dify app should handle the replies, as well as the Slack App token to authenticate the bot’s responses. Therefore, you’ll add these two fields to the plugin’s form. + +Modify the YAML file in the group directory—for example, `group/slack.yaml`. The form’s filename is determined by the info you provided when creating the plugin, so adjust it accordingly. + +**Sample Code:** + +`slack.yaml` + +```yaml +settings: + - name: bot_token + type: secret-input + required: true + label: + en_US: Bot Token + zh_Hans: Bot Token + pt_BR: Token do Bot + ja_JP: Bot Token + placeholder: + en_US: Please input your Bot Token + zh_Hans: 请输入你的 Bot Token + pt_BR: Por favor, insira seu Token do Bot + ja_JP: ボットトークンを入力してください + - name: allow_retry + type: boolean + required: false + label: + en_US: Allow Retry + zh_Hans: 允许重试 + pt_BR: Permitir Retentativas + ja_JP: 再試行を許可 + default: false + - name: app + type: app-selector + required: true + label: + en_US: App + zh_Hans: 应用 + pt_BR: App + ja_JP: アプリ + placeholder: + en_US: the app you want to use to answer Slack messages + zh_Hans: 你想要用来回答 Slack 消息的应用 + pt_BR: o app que você deseja usar para responder mensagens do Slack + ja_JP: あなたが Slack メッセージに回答するために使用するアプリ +endpoints: + - endpoints/slack.yaml +``` + +Explanation of the Configuration Fields: + +``` + - name: app + type: app-selector + scope: chat +``` + +* **type**: Set to app-selector, which allows users to forward messages to a specific Dify app when using this plugin. 
+
+* **scope**: Set to chat, meaning the plugin can only interact with app types such as agent, chatbot, or chatflow.
+
+Finally, in the `endpoints/slack.yaml` file, change the request method to POST to handle incoming Slack messages properly.
+
+**Sample Code:**
+
+`endpoints/slack.yaml`
+
+```yaml
+path: "/"
+method: "POST"
+extra:
+  python:
+    source: "endpoints/slack.py"
+```
+
+#### 1.3 Edit the Function Code
+
+Modify the `endpoints/slack.py` file and add the following code:
+
+```python
+import json
+import traceback
+from typing import Mapping
+from werkzeug import Request, Response
+from dify_plugin import Endpoint
+from slack_sdk import WebClient
+from slack_sdk.errors import SlackApiError
+
+
+class SlackEndpoint(Endpoint):
+    def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response:
+        """
+        Invokes the endpoint with the given request.
+        """
+        # Skip Slack's automatic retries unless retries are explicitly allowed
+        retry_num = r.headers.get("X-Slack-Retry-Num")
+        if (not settings.get("allow_retry") and (r.headers.get("X-Slack-Retry-Reason") == "http_timeout" or ((retry_num is not None and int(retry_num) > 0)))):
+            return Response(status=200, response="ok")
+        data = r.get_json()
+
+        # Handle Slack URL verification challenge
+        if data.get("type") == "url_verification":
+            return Response(
+                response=json.dumps({"challenge": data.get("challenge")}),
+                status=200,
+                content_type="application/json"
+            )
+
+        if (data.get("type") == "event_callback"):
+            event = data.get("event")
+            if (event.get("type") == "app_mention"):
+                message = event.get("text", "")
+                if message.startswith("<@"):
+                    # Strip the leading <@bot_id> mention from the message text
+                    message = message.split("> ", 1)[1] if "> " in message else message
+                    channel = event.get("channel", "")
+                    # Drop the mention element from the rich-text blocks as well
+                    blocks = event.get("blocks", [])
+                    blocks[0]["elements"][0]["elements"] = blocks[0].get("elements")[0].get("elements")[1:]
+                    token = settings.get("bot_token")
+                    client = WebClient(token=token)
+                    try:
+                        # Forward the question to the selected Dify app
+                        response = self.session.app.chat.invoke(
+                            app_id=settings["app"]["app_id"],
+                            query=message,
+                            inputs={},
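+                            # response_mode="blocking" waits for the complete answer
+                            # before returning (assumption: chosen here so the reply
+                            # maps onto a single Slack message rather than a stream)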
+                            response_mode="blocking",
+                        )
+                        try:
+                            blocks[0]["elements"][0]["elements"][0]["text"] = response.get("answer")
+                            result = client.chat_postMessage(
+                                channel=channel,
+                                text=response.get("answer"),
+                                blocks=blocks
+                            )
+                            return Response(
+                                status=200,
+                                response=json.dumps(result),
+                                content_type="application/json"
+                            )
+                        except SlackApiError as e:
+                            raise e
+                    except Exception as e:
+                        err = traceback.format_exc()
+                        return Response(
+                            status=200,
+                            response="Sorry, I'm having trouble processing your request. Please try again later." + str(err),
+                            content_type="text/plain",
+                        )
+                else:
+                    return Response(status=200, response="ok")
+            else:
+                return Response(status=200, response="ok")
+        else:
+            return Response(status=200, response="ok")
+```
+
+### 2. Debug the Plugin
+
+Go to the Dify platform and obtain the remote debugging address and key for your plugin.
+
+![](https://assets-docs.dify.ai/2025/01/8d24006f0cabf5bf61640a9023c45db8.png)
+
+Back in your plugin project, copy the `.env.example` file and rename it to `.env`.
+
+```bash
+INSTALL_METHOD=remote
+REMOTE_INSTALL_HOST=remote
+REMOTE_INSTALL_PORT=5003
+REMOTE_INSTALL_KEY=****-****-****-****-****
+```
+
+Run `python -m main` to start the plugin. You should now see your plugin installed in the Workspace on Dify’s plugin management page. Other team members will also be able to access it.
+
+```bash
+python -m main
+```
+
+#### Configure the Plugin Endpoint
+
+From the plugin management page in Dify, locate the newly installed test plugin and create a new endpoint. Provide a name, a Bot token, and select the app you want to connect.
+
+
+
+After saving, a **POST** request URL is generated:
+
+
+
+Next, complete the Slack App setup:
+
+1. **Enable Event Subscriptions**
+   ![](https://assets-docs.dify.ai/2025/01/1d33bb9cde78a1b5656ad6a0b8350195.png)
+
+   Paste the POST request URL you generated above.
+ ![](https://assets-docs.dify.ai/2025/01/65aa41f37c3800af49e944f9ff28e121.png) + +2. **Grant Required Permissions** + ![](https://assets-docs.dify.ai/2025/01/25c38a2cf10ec6c55ae54970d790f37e.png) + +--- + +### 3. Verify the Plugin + +In your code, `self.session.app.chat.invoke` is used to call the Dify application, passing in parameters such as `app_id` and `query`. The response is then returned to the Slack Bot. Run `python -m main` again to restart your plugin for debugging, and check whether Slack correctly displays the Dify App’s reply: + +![](https://assets-docs.dify.ai/2025/01/6fc872d1343ce8503d63c5222f7f26f9.png) + +--- + +### 4. Package the Plugin (Optional) + +Once you confirm that the plugin works correctly, you can package and name it via the following command. After it runs, you’ll find a `slack_bot.difypkg` file in the current directory—your final plugin package. + +```bash +# Replace ./slack_bot with your actual plugin project path. + +dify plugin package ./slack_bot +``` + +Congratulations! You’ve successfully developed, tested, and packaged a plugin! + +--- + +### 5. Publish the Plugin (Optional) + +You can now upload it to the [Dify Marketplace repository](https://github.com/langgenius/dify-plugins) for public release. Before publishing, ensure your plugin meets the [Plugin Publishing Guidelines](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace). Once approved, your code is merged into the main branch, and the plugin goes live on the [Dify Marketplace](https://marketplace.dify.ai/). + +--- + +### Further Reading + +For a complete Dify plugin project example, visit the [GitHub repository](https://github.com/langgenius/dify-plugins). You’ll also find additional plugins with full source code and implementation details. 
+
+If you want to explore more about plugin development, check the following:
+
+**Quick Starts:**
+- [Develop an Extension Plugin](../extension.md)
+- [Develop a Model Plugin](../model/)
+- [Bundle Plugins: Packaging Multiple Plugins](../bundle.md)
+
+**Plugin Interface Docs:**
+- [Manifest](../../../api-documentation/manifest.md) structure
+- [Endpoint](../../../api-documentation/endpoint.md) definitions
+- [Reverse-Calling Dify Services](../../../api-documentation/fan-xiang-diao-yong-dify-fu-wu/)
+- [Tools](../../../api-documentation/tool.md)
+- [Models](../../../api-documentation/model/)
+
diff --git a/en/plugins/faq.mdx b/en/plugins/faq.mdx
new file mode 100644
index 00000000..762ab10d
--- /dev/null
+++ b/en/plugins/faq.mdx
@@ -0,0 +1,28 @@
+---
+title: FAQ
+description: 'Author: Allen'
+---
+
+## How to handle plugin upload failure during installation?
+
+**Error Details**: The error message shows `PluginDaemonBadRequestError: plugin_unique_identifier is not valid`.
+
+**Solution**: Modify the `author` field in both the `manifest.yaml` file in the plugin project and the `.yaml` file under the `/provider` path to your GitHub ID.
+
+Re-run the plugin packaging command and install the new plugin package.
+
+## How to handle errors when installing plugins?
+
+**Issue**: When installing a plugin, you encounter the error message: `plugin verification has been enabled, and the plugin you want to install has a bad signature`.
+
+**Solution**: Add the following line to the end of your `/docker/.env` configuration file: `FORCE_VERIFYING_SIGNATURE=false`. Run the following commands to restart the Dify service:
+
+```bash
+cd docker
+docker compose down
+docker compose up -d
+```
+
+Once this field is added, the Dify platform will allow the installation of all plugins that are not listed (and thus not verified) in the Dify Marketplace.
+
+**Note**: For security reasons, always install plugins from unknown sources in a test or sandbox environment first.
Confirm their safety before deploying to the production environment.
diff --git a/en/plugins/introduction.mdx b/en/plugins/introduction.mdx
new file mode 100644
index 00000000..9b328f1b
--- /dev/null
+++ b/en/plugins/introduction.mdx
@@ -0,0 +1,106 @@
+---
+title: Introduction
+---
+
+
+> To access the plugin’s functionality in the Community Edition, please update the version to v1.0.0.
+
+## **What is a Plugin?**
+
+A plugin is a more developer-friendly and highly extensible third-party service extension module. While the Dify platform already includes numerous tools maintained by the Dify team and community contributors, the existing tools may not fully meet the demands of various niche scenarios. Additionally, developing and integrating new tools into the Dify platform often requires a lengthy process.
+
+To enable more agile development, we have opened up the ecosystem and provided a comprehensive plugin development SDK. This empowers every developer to easily **build their own tools and seamlessly integrate third-party models and tools**, enhancing application capabilities significantly.
+
+## What Are the Advantages of Plugins?
+
+The new plugin system goes beyond the limitations of the previous framework, offering richer and more powerful extension capabilities. It contains five distinct plugin types, each designed to solve well-defined scenarios, giving developers limitless freedom to customize and enhance Dify applications.
+
+Additionally, the plugin system is designed to be easily shared. You can distribute your plugins via the [Dify Marketplace](https://marketplace.dify.ai/), [GitHub](publish-plugins/publish-plugin-on-personal-github-repo/), or as a [Local file package](publish-plugins/package-and-publish-plugin-file/). Other developers can quickly install these plugins and benefit from them.
+
+> Dify Marketplace is an open ecosystem designed for developers, offering a broad range of resources—models, tools, AI Agents, Extensions, and plugin bundles.
You can seamlessly integrate third-party services into your existing Dify applications through the Marketplace, enhancing their capabilities and advancing the overall Dify community.
+
+Whether you’re looking to integrate a new model or add a specialized tool to expand Dify’s existing features, the robust plugin marketplace has the resources you need. **We encourage more developers to join and help shape the Dify ecosystem, benefiting everyone involved.**
+
+![](https://assets-docs.dify.ai/2025/01/83f9566063db7ae4886f6a139f3f81ff.png)
+
+## **What Are the Types of Plugins?**
+
+* **Models**
+
+  These plugins integrate various AI models (including mainstream LLM providers and custom models) to handle configuration and requests for LLM APIs. For more on creating a model plugin, refer to [Quick Start: Model Plugin](https://docs.dify.ai/plugins/quick-start/develop-plugins/model-plugin).
+* **Tools**
+
+  Tools refer to third-party services that can be invoked by Chatflow, Workflow, or Agent-type applications. They provide a complete API implementation to enhance the capabilities of Dify applications. For example, to develop a Google Search plugin, please refer to [Quick Start: Tool Plugin](quick-start/develop-plugins/tool-plugin.md).
+* **Agent Strategy**
+
+  Agent Strategy plugins define the reasoning and decision-making logic within an Agent node, including the LLM’s tool selection, invocation, and handling of returned results. For further development guidance, please refer to the [Quick Start: Agent Strategy Plugin](quick-start/develop-plugins/agent-strategy-plugin.md).
+* **Extensions**
+
+  Lightweight plugins that only provide endpoint capabilities for simpler scenarios, enabling fast expansions via HTTP services.
This approach is ideal for straightforward integrations requiring basic API invocation. For more details, refer to [Quick Start: Extension Plugin](quick-start/develop-plugins/extension-plugin.md).
+* **Bundle**
+
+  A “plugin bundle” is a collection of multiple plugins. Bundles allow you to install a curated set of plugins all at once—no more adding them one by one. For more information on creating plugin bundles, see [Plugin Development: Bundle Plugin](quick-start/develop-plugins/bundle.md).
+
+### **What’s New in Plugins?**
+
+* **Extend LLM’s Multimodal Capabilities**
+
+  Plugins can boost an LLM’s ability to handle multimedia. Developers can add tasks like image editing, video processing, and more—ranging from cropping and background removal to working with portrait images.
+* **Developer-Friendly Debugging Capabilities**
+
+  The plugin system supports popular IDEs and debugging tools. You just configure a few environment variables to remotely connect to a Dify instance—even one running as a SaaS. Any actions you take on that plugin in Dify are forwarded to your local runtime for debugging.
+* **Persistent Data Storage**
+
+  Designed for more complex use cases, the plugin system now includes data persistence:
+
+  * **Plugin-Level Data Storage**: You can share workspace-level information with plugins, enabling richer custom features.
+  * **Built-In Data Management**: Plugins can reliably store and manage data, making it easier to implement complex business logic.
+* **Convenient Reverse Invocation**
+
+  Plugins can now interact bidirectionally with Dify’s core functions, including:
+
+  * AI model invocation
+  * Tool usage
+  * Application access
+  * Knowledge base interaction
+  * Function node calls (such as question classification, parameter extraction, etc.)
+
+  This two-way mechanism allows plugins to act not only as a way to leverage existing Dify capabilities, but also as a standalone gateway—expanding the usage scenarios for your applications.
+
+* **Enhanced Endpoint Customization Capabilities**
+
+  Beyond the existing Dify app APIs (like Chatbot or Workflow APIs), you can now create custom APIs within plugins. Developers can wrap their business logic as a plugin, host it on the [Dify Marketplace](https://marketplace.dify.ai/), and automatically get endpoint support for data processing and request handling.
+
+## Learn More
+
+**Quick Start**
+
+To quickly install and use plugins, refer to:
+
+
+  install-plugins.md
+
+
+To start developing plugins, refer to:
+
+
+  develop-plugins
+
+
+**Publishing Plugins**
+
+To publish your plugin on the [Dify Marketplace](https://marketplace.dify.ai/), fill out the required information and usage documentation. Then submit your plugin code to the [GitHub repository](https://github.com/langgenius/dify-plugins). Once approved, it will be listed in the marketplace:
+
+
+  publish-to-dify-marketplace
+
+
+Beyond the official Dify Marketplace, you can also host your plugin on a personal GitHub repository or package it as a file for direct sharing:
+
+
+  publish-plugin-on-personal-github-repo
+
+
+
+  package-plugin-file-and-publish.md
+
diff --git a/en/plugins/manage-plugins.mdx b/en/plugins/manage-plugins.mdx
new file mode 100644
index 00000000..751d72da
--- /dev/null
+++ b/en/plugins/manage-plugins.mdx
@@ -0,0 +1,40 @@
+---
+title: Manage Plugins
+---
+
+
+This document will guide Workspace owners and administrators in configuring and managing plugin permission settings. Plugin permission controls determine which users can perform plugin-related operations.
+
+### **Adjusting Plugin Permissions**
+
+Team owners and administrators can control the following plugin permissions on the **"Plugins"** page in the top right corner of the Dify platform homepage:
+
+* **Install and Manage Plugin Permissions**
+
+  This permission controls who can install and manage plugins in the system.
Options: + + * **Everyone**: Allows all users in the Workspace to install and manage plugins + * **Admins**: Only Workspace administrators can install and manage plugins + * **No one**: No one is allowed to install and manage plugins +* **Plugin Debug Permissions** + + This permission controls who can perform plugin debugging work. Options: + + * **Everyone**: Allows all users in the Workspace to debug plugins + * **Admins**: Only Workspace administrators can debug plugins + * **No one**: No one is allowed to debug plugins + +![Plugins permission](https://assets-docs.dify.ai/2024/12/a2bca75a7757b7cafae2cb4ba0ad9fff.png) + +### **Upgrading Plugins** + +Click the "Plugins" button in the top right corner of the Dify platform, select the plugin that needs updating, and click the **"Upgrade"** button next to the plugin title. + +![](https://assets-docs.dify.ai/2024/12/83bd5ec12ec914c73d0ea2a5992cd6df.png) + +### **Deleting Plugins** + +Click the "Plugins" button in the top right corner of the Dify platform to see all installed plugins in the current Workspace. Click the "Delete" icon or the Remove button on the right side of the plugin details page to remove the plugin. + +![Remove plugin](https://assets-docs.dify.ai/2024/12/6cb1c000d20720c16ae3c0a70df26fd3.png) + diff --git a/en/plugins/publish-plugins/README.mdx b/en/plugins/publish-plugins/README.mdx new file mode 100644 index 00000000..f36ef518 --- /dev/null +++ b/en/plugins/publish-plugins/README.mdx @@ -0,0 +1,76 @@ +--- +title: Publish Plugins +--- + + +### Publish Methods + +To accommodate the various publishing needs of developers, Dify provides three plugin publish methods: + +#### **1. Marketplace** + +**Introduction**: The official Dify plugin marketplace allows users to browse, search, and install a variety of plugins with just one click. + +**Features**: + +* Plugins become available after passing a review, ensuring they are **trustworthy** and **high-quality**. 
+* Can be installed directly into an individual or team **Workspace**.
+
+**Publication Process**:
+
+* Submit the plugin project to the **Dify Marketplace** [code repository](https://github.com/langgenius/dify-plugins).
+* After an official review, the plugin will be publicly released in the marketplace for other users to install and use.
+
+For detailed instructions, please refer to:
+
+
+  publish-to-dify-marketplace
+
+
+#### **2. GitHub Repository**
+
+**Introduction**: Open-sourcing or hosting the plugin on **GitHub** makes it easy for others to view, download, and install.
+
+**Features**:
+
+* Convenient for **version management** and **open-source sharing**.
+* Users can install the plugin directly via a link, bypassing platform review.
+
+**Publication Process**:
+
+* Push the plugin code to a GitHub repository.
+* Share the repository link; users can then integrate the plugin into their **Dify Workspace** through the link.
+
+For detailed instructions, please refer to:
+
+
+  publish-plugin-on-personal-github-repo.md
+
+
+#### **3. Plugin File (Local Installation)**
+
+**Introduction**: Package the plugin as a local file (e.g., `.difypkg` format) and share it for others to install.
+
+**Features**:
+
+* Does not depend on an online platform, enabling **quick and flexible** sharing of plugins.
+* Suitable for **private plugins** or **internal testing**.
+
+**Publication Process**:
+
+* Package the plugin project as a local file.
+* Click **Upload Plugin** on the Dify plugin page and select the local file to install the plugin.
+
+You can package the plugin project as a local file and share it with others. After uploading the file on the plugin page, the plugin can be installed into the Dify Workspace.
+ +For detailed instructions, please refer to: + + + Package as Local File and Share + + +### **Publication Recommendations** + +* **Looking to promote a plugin** → **Recommended to use the Marketplace**, ensuring plugin quality through official review and increasing exposure. +* **Open-source sharing project** → **Recommended to use GitHub**, convenient for version management and community collaboration. +* **Quick distribution or internal testing** → **Recommended to use plugin file**, allowing for straightforward and efficient installation and sharing. \ No newline at end of file diff --git a/en/plugins/publish-plugins/package-plugin-file-and-publish.mdx b/en/plugins/publish-plugins/package-plugin-file-and-publish.mdx new file mode 100644 index 00000000..604689db --- /dev/null +++ b/en/plugins/publish-plugins/package-plugin-file-and-publish.mdx @@ -0,0 +1,47 @@ +--- +title: Package the Plugin File and Publish it +--- + + +After completing plugin development, you can package your plugin project as a local file and share it with others. Once the plugin file is obtained, it can be installed into a Dify Workspace. This guide will show you how to package a plugin project as a local file and how to install plugins using local files. + +### **Prerequisites** + +You'll need the Dify plugin development scaffolding tool for packaging plugins. Download the tool from the [official GitHub releases page](https://github.com/langgenius/dify-plugin-daemon/releases). + +See the [Initialize Development Tools](../quick-start/develop-plugins/initialize-development-tools.md) tutorial for dependency installation and configuration steps. + +Select and download the version appropriate for your operating system from the release assets. 
+ +Using **macOS with M-series chips** as an example, download the `dify-plugin-darwin-arm64` file, then navigate to the file's location in terminal and grant execution permissions: + +```bash +chmod +x dify-plugin-darwin-arm64 +``` + +For global use of the scaffolding tool, it's recommended to rename the binary file to `dify` and copy it to the `/usr/local/bin` system path. + +After configuration, enter the `dify version` command in terminal to verify version number output. + +### **Packaging Plugins** + +After completing plugin project development, ensure remote connection testing is done. To package plugins, navigate to the parent directory of your plugin project and run the following packaging command: + +```bash +cd ../ +dify plugin package ./your_plugin_project +``` + +After running the command, a file with `.difypkg` extension will be generated in the current path. + +![](https://assets-docs.dify.ai/2024/12/98e09c04273eace8fe6e5ac976443cca.png) + +### **Installing Plugins** + +Visit the Dify plugin management page, click **Install Plugin** → **Install via Local File** in the top right corner, or drag and drop the plugin file to a blank area of the page to install. + +![](https://assets-docs.dify.ai/2024/12/8c31c4025a070f23455799f942b91a57.png) + +### **Publishing Plugins** + +You can share the plugin file with others or upload it to the internet for download. diff --git a/en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.mdx b/en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.mdx new file mode 100644 index 00000000..f352b15a --- /dev/null +++ b/en/plugins/publish-plugins/publish-plugin-on-personal-github-repo.mdx @@ -0,0 +1,87 @@ +--- +title: Publish to Your Personal GitHub Repository +--- + + +You can install plugins through GitHub repository links. After developing a plugin, you can choose to publish it to a public GitHub repository for others to download and use. 
This method offers the following advantages:
+
+* **Personal Management**: Complete control over plugin code and updates
+
+* **Quick Sharing**: Easily share the plugin with other users or team members via a GitHub link for testing and use
+
+* **Collaboration and Feedback**: Open-sourcing your plugin may attract collaborators on GitHub who can help improve it
+
+This guide will show you how to publish plugins to a GitHub repository.
+
+### **Prerequisites**
+
+* A GitHub account
+* A new public GitHub repository
+* Git tools installed locally
+
+For basic GitHub knowledge, please refer to the [GitHub documentation](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository).
+
+### **1. Prepare Plugin Project**
+
+Publishing to public GitHub means your plugin will be open source. Ensure you've completed debugging and verification, and have a comprehensive `README.md` file.
+
+Recommended README contents:
+
+* Plugin introduction and feature description
+* Installation and configuration steps
+* Usage examples
+* Contact information or contribution guidelines
+
+### **2. Initialize Local Plugin Repository**
+
+Before publishing to GitHub, ensure debugging and verification are complete. Navigate to the plugin project folder in the terminal and run:
+
+```bash
+git init
+git add .
+git commit -m "Initial commit: Add plugin files"
+```
+
+If this is your first time using Git, you may also need to configure your Git username and email:
+
+```bash
+git config --global user.name "Your Name"
+git config --global user.email "your.email@example.com"
+```
+
+### **3. Connect Remote Repository**
+
+Use this command to connect the local repository to GitHub, replacing the placeholders with your GitHub username and repository name:
+
+```bash
+git remote add origin https://github.com/your-username/your-repository.git
+```
+
+### **4. Upload Plugin Files**
+
+Push the project to the GitHub repository:
+
+```bash
+git branch -M main
+git push -u origin main
+```
+
+It's recommended to add a version tag to make future packaging and releases easier:
+
+```bash
+git tag -a v0.0.1 -m "Release version 0.0.1"
+git push origin v0.0.1
+```
+
+### **5. Package Plugin Code**
+
+Go to the Releases page of your GitHub repository and create a new release. Upload the plugin file when publishing. For detailed instructions on packaging plugins, please read the packaging plugins documentation.
+
+![Packaging Plugins](https://assets-docs.dify.ai/2024/12/5cb4696348cc6903e380287fce8f529d.png)
+
+### **Installing Plugins via GitHub**
+
+Others can install the plugin using the GitHub repository address. Visit the Dify platform's plugin management page, choose to install via GitHub, enter the repository address, then select the version number and package file to complete installation.
+
+![](https://assets-docs.dify.ai/2024/12/3c2612349c67e6898a1f33a7cc320468.png)
+
diff --git a/en/plugins/publish-plugins/publish-to-dify-marketplace/README.mdx b/en/plugins/publish-plugins/publish-to-dify-marketplace/README.mdx
new file mode 100644
index 00000000..bbe39220
--- /dev/null
+++ b/en/plugins/publish-plugins/publish-to-dify-marketplace/README.mdx
@@ -0,0 +1,91 @@
+---
+title: Introduction
+---
+
+**Dify Marketplace** welcomes plugin submission requests from both partners and community developers. Your contributions will broaden the scope of possibilities for Dify plugins. This guide provides clear publishing procedures and best-practice recommendations to help you successfully release your plugin and create value for the community.
+
+Follow these steps to submit your plugin as a Pull Request (PR) in the [GitHub repository](https://github.com/langgenius/dify-plugins) for review. Once approved, your plugin will officially launch on the Dify Marketplace.
+
+### Submitting a Pull Request (PR) for Review
+
+To publish your plugin on the Dify Marketplace, follow these steps:
+
+1.
Develop and test your plugin according to the [Plugin Developer Guidelines](plugin-developer-guidelines.md).
+2. Write a [Plugin Privacy Policy](plugin-privacy-protection-guidelines.md) for your plugin in line with Dify's privacy policy requirements. In your plugin's [Manifest](../../schema-definition/manifest.md) file, include the file path or URL for this privacy policy.
+3. Package your plugin for distribution.
+4. Fork the [Dify Plugins](https://github.com/langgenius/dify-plugins) repository on GitHub.
+5. Create an organization directory under the repository's main structure, then create a subdirectory named after your plugin. Place your plugin's source code and the packaged `.difypkg` file in that subdirectory.
+6. Submit a Pull Request (PR) following the required PR template format, then wait for review.
+7. Once approved, your plugin code will merge into the main branch, and the plugin will be automatically listed on the [Dify Marketplace](https://marketplace.dify.ai/).
+
+Here is the general plugin review process:
+
+![The process of uploading plugins](https://assets-docs.dify.ai/2025/01/05df333acfaf662e99316432db23ba9f.png)
+
+
+**Note**:
+
+The Contributor Agreement mentioned above refers to the [Plugin Developer Guidelines](plugin-developer-guidelines.md).
+
+
+***
+
+### **During Pull Request (PR) Review**
+
+Respond proactively to reviewer questions and feedback.
+
+* PR comments unresolved within **14 days** will be marked as stale (they can be reopened).
+* PR comments unresolved within **30 days** will be closed (they cannot be reopened, and a new PR must be created).
+
+***
+
+### **After Pull Request (PR) Approval**
+
+**1. Ongoing Maintenance**
+
+* Address user-reported issues and feature requests
+* Migrate plugins when major API changes occur:
+  * Dify will provide advance notice of changes and migration instructions.
+  * Dify engineers can provide migration support.
+
+**2.
Marketplace Public Beta Testing Restrictions** + +* Avoid introducing breaking changes to existing plugins. + +*** + +### **Review Process** + +**1. Review Order** + +* Pull Requests (PRs) are handled on a **first-come, first-served** basis. +* Review will generally begin within one week. If there’s a delay, the reviewer will comment on the PR to inform the author. + +**2. Review Focus** + +* Verify plugin name, description, and setup instructions are clear and instructive. +* Check if the plugin’s [Manifest](https://docs.dify.ai/plugins/schema-definition/manifest) file follows format specifications and includes valid author contact information. + +**3. Plugin Functionality and Relevance** + +* Test plugins according to [Plugin Development Guidelines](plugin-developer-guidelines.md). +* Ensure the plugin's purpose is reasonable within the Dify ecosystem. + +[Dify.AI](https://dify.ai/) reserves the right to accept or reject plugin submissions. + +*** + +### **Frequently Asked Questions** + +1. **How do I know if my plugin is unique?** + +Example: A Google search plugin that only adds language parameters should probably be submitted as an extension to an existing plugin. However, if the plugin implements significant functional improvements (like optimized batch processing or error handling), it can be submitted as a new plugin. + +2. **What if my PR is marked as stale or closed?** + +* A stale PR can be reopened once you’ve addressed the requested changes. +* A closed PR (over 30 days old) requires opening a new PR. + +3. **Can I update my plugin during the beta test phase?** + +Yes, but breaking changes should be avoided. 
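For reference, the repository layout produced by steps 4 and 5 of the submission checklist above might look like this (the organization and plugin names are placeholders):

```text
dify-plugins/                      # your fork of langgenius/dify-plugins
└── your-org/                      # organization directory
    └── your-plugin/               # subdirectory named after your plugin
        ├── ...                    # plugin source code
        └── your-plugin.difypkg    # packaged plugin file
```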
diff --git a/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.mdx b/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.mdx
new file mode 100644
index 00000000..88611251
--- /dev/null
+++ b/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.mdx
@@ -0,0 +1,54 @@
+---
+title: Plugin Developer Guidelines
+---
+
+
+### Before Submitting a Pull Request (PR)
+
+1. **Check Plugin's Functionality and Complete Documentation**
+
+* Verify that the plugin works as intended.
+* Provide a comprehensive **README file** including:
+  * Setup and usage instructions
+  * Required codes, APIs, credentials, or other information needed to connect the plugin to services
+* Ensure collected user information is only used for service connectivity and plugin improvements.
+* Prepare the plugin's privacy policy file or URL according to the [Plugin Privacy Protection Guidelines](plugin-privacy-protection-guidelines.md).
+
+2. **Validate Plugin Value Proposition**
+
+* Ensure your plugin provides unique value to Dify users.
+* The plugin should introduce features or services not currently available in Dify or other plugins.
+* Follow community standards:
+  * Non-violent content that respects the global user base
+  * Compliance with integrated service policies
+* **How to Check for Similar Plugins?**
+  * Avoid submitting functionality that duplicates existing plugins or PRs unless your plugin:
+    * Introduces new features
+    * Provides performance improvements
+  * **Determining Plugin Uniqueness:**
+    * If the plugin makes minor adjustments to existing functionality (like adding language parameters), consider extending the existing plugin.
+    * If the plugin implements significant functional changes (like optimized batch processing or improved error handling), submit it as a new plugin.
+    * Not sure? Include a brief explanation in your PR describing why a new plugin is needed.
+ +**Example:** + +Consider a Google Search plugin that takes a single input query and outputs Google search results using the Google Search API. + +If you're offering a new Google Search plugin with similar underlying implementation but minor input adjustments (e.g., adding new language parameters), we recommend extending the existing plugin. + +However, if you've implemented the plugin differently with optimized batch search capabilities and error handling, it may be reviewed as a separate plugin. + + +3. **Ensure Compliance with Privacy Data Standards** + +**Information Disclosure Requirements:** + +* Developers **must** declare whether they collect any type of user personal data when submitting applications/tools. +* If collecting data, **briefly list** the types of data collected (e.g., username, email, device ID, location information, etc.) - detailed explanations are not necessary. +* Developers **must** provide a privacy policy link that states what information is collected, how it's used, what information is shared with third parties, and links to third-party privacy policies. + +**Review Focus:** + +* **Format Review:** Check if data collection has been declared as required. +* **High-Risk Data Screening:** Focus on whether sensitive data is collected (e.g., health information, financial information, children's personal information). **Additional review** of usage purpose and security measures is required if sensitive data is collected. +* **Malicious Behavior Screening:** Check for obvious malicious behavior, such as collecting data without user consent or uploading user data to unknown servers. 
\ No newline at end of file diff --git a/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.mdx b/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.mdx new file mode 100644 index 00000000..5e68e968 --- /dev/null +++ b/en/plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.mdx @@ -0,0 +1,76 @@ +--- +title: Plugin Privacy Protection Guidelines +--- + + +When you are submitting your Plugin to Dify Marketplace, you are required to be transparent in how you handle user data. The following guidelines focus on how to address privacy-related questions and user data processing for your plugin. + +Center your privacy policy around the following points: + +**Does your plugin collect and use any user personal data?** If it does, please list the types of data collected. + +> “Personal data” refers to any information that can identify a specific individual—either on its own or when combined with other data—such as information used to locate, contact, or otherwise target a unique person. + +#### 1. 
List the types of data collected
+
+**Type A: Direct Identifiers**
+
+* Name (e.g., full name, first name, last name)
+* Email address
+* Phone number
+* Home address or other physical address
+* Government-issued identification numbers (e.g., Social Security number, passport number, driver's license number)
+
+**Type B: Indirect Identifiers**
+
+* Device identifiers (e.g., IMEI, MAC address, device ID)
+* IP address
+* Location data (e.g., GPS coordinates, city, region)
+* Online identifiers (e.g., cookies, advertising IDs)
+* Usernames
+* Profile pictures
+* Biometric data (e.g., fingerprints, facial recognition data)
+* Web browsing history
+* Purchase history
+* Health information
+* Financial information
+
+**Type C: Data that can be combined with other data to identify an individual**
+
+* Age
+* Gender
+* Occupation
+* Interests
+
+Even if your plugin itself does not collect any personal information, third-party services used within your plugin may still collect or process data. As the plugin developer, you are responsible for disclosing all data collection activities associated with your plugin, including those performed by third-party services. Make sure to read through the privacy policy of each third-party service and verify that any data collected by your plugin is disclosed in your submission.
+
+For example, if the plugin you are developing involves Slack services, make sure to reference [Slack's privacy policy](https://slack.com/trust/privacy/privacy-policy) in your plugin's privacy policy statement and clearly disclose the data collection practices.
+
+#### **2. Submit the most up-to-date Privacy Policy of your Plugin**
+
+**The Privacy Policy** should contain:
+
+* What types of data are collected.
+* How the collected data is used.
+* Whether any data is shared with third parties, and if so, identify those third parties and provide links to their privacy policies.
+* If you're unsure how to write a privacy policy, you can refer to the privacy policies of plugins published by the Dify team.
+
+#### 3. Declare the privacy policy in the plugin Manifest file
+
+For detailed instructions on filling out specific fields, please refer to the [Manifest](https://docs.dify.ai/plugins/schema-definition/manifest) documentation.
+
+**FAQ**
+
+1. **What does "collect and use" mean regarding user personal data? What are common examples of how personal data is collected and used in a plugin?**
+
+"Collect and use" generally refers to the collection, transmission, use, or sharing of user data. Common examples of how products may handle personal or sensitive user data include:
+
+* Using forms that gather any kind of personally identifiable information.
+* Implementing login features, even when using third-party authentication services.
+* Collecting information about input or resources that may contain personally identifiable information.
+* Implementing analytics to track user behavior, interactions, and usage patterns.
+* Storing communication data like messages, chat logs, or email addresses.
+* Accessing user profiles or data from connected social media accounts.
+* Collecting health and fitness data such as activity levels, heart rate, or medical information.
+* Storing search queries or tracking browsing behavior.
+* Processing financial information including bank details, credit scores, or transaction history.
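To illustrate the Manifest reference described above, a plugin's `manifest.yaml` can point to its privacy policy. The excerpt below is a hypothetical sketch; verify the exact field names against the Manifest documentation linked above:

```yaml
# Hypothetical manifest.yaml excerpt; verify field names against the
# Manifest documentation before relying on them.
name: your_plugin
author: your-name
# Path to a privacy policy file bundled with the plugin, or an external URL:
privacy: PRIVACY.md
```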
diff --git a/en/plugins/quick-start/README.mdx b/en/plugins/quick-start/README.mdx
new file mode 100644
index 00000000..54f2792d
--- /dev/null
+++ b/en/plugins/quick-start/README.mdx
@@ -0,0 +1,54 @@
+---
+title: Quick Start
+---
+
+
+Depending on your needs, follow the documentation path below that fits you best:
+
+### I Want to Use Plugins
+
+If you're looking to quickly install and start using plugins, prioritize the following content:
+
+  install-plugins.md
+
+### I Want to Develop Plugins
+
+If you plan to develop plugins yourself, follow these steps to progressively dive deeper:
+
+#### 1. Quick Start
+
+Explore development examples for various plugin types to grasp the basic structure and development process quickly.
+
+  tool-plugin.md
+
+  model-plugin
+
+  agent-strategy-plugin.md
+
+  extension-plugin.md
+
+  bundle.md
+
+#### 2. Advanced Development
+
+Read the **Endpoint Documentation** to gain an in-depth understanding of **key interfaces** and **implementation details** in plugin development.
+
+  schema-definition
+
+Tailor your reading to your specific needs to efficiently master the use or development of plugins.
+
+This section primarily focuses on guiding you through the installation and use of plugins.
diff --git a/en/plugins/quick-start/debug-plugin.mdx b/en/plugins/quick-start/debug-plugin.mdx
new file mode 100644
index 00000000..572fbe1b
--- /dev/null
+++ b/en/plugins/quick-start/debug-plugin.mdx
@@ -0,0 +1,31 @@
+---
+title: Debug Plugin
+---
+
+
+
+Once plugin development is complete, the next step is to test whether the plugin runs correctly. Dify provides a remote debugging method.
+
+Go to the ["Plugin"](https://cloud.dify.ai/plugins) page to get the debugging key and remote URL.
+
+![](https://assets-docs.dify.ai/2024/11/1cf15bc59ea10eb67513c8bdca557111.png)
+
+Go back to the plugin project, copy the `.env.example` file and rename it to `.env`.
Fill it with the remote server address and debug key.
+
+`.env` file:
+
+```bash
+INSTALL_METHOD=remote
+REMOTE_INSTALL_HOST=remote
+REMOTE_INSTALL_PORT=5003
+REMOTE_INSTALL_KEY=****-****-****-****-****
+```
+
+Run the `python -m main` command to start the plugin. On the plugin page, you can see that the plugin has been installed into the Workspace. Other team members can also access the plugin.
+
+![](https://assets-docs.dify.ai/2024/12/e11acb42ccb23c824f400b7e19fb2952.png)
+
+If the plugin is a model provider, you can initialize it by entering the API Key in **Settings → Model Provider**.
+
+![](https://assets-docs.dify.ai/2024/12/662de537d70a3607c240a05294a9f3e1.png)
diff --git a/en/plugins/quick-start/develop-plugins/README.mdx b/en/plugins/quick-start/develop-plugins/README.mdx
new file mode 100644
index 00000000..ce6c745d
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/README.mdx
@@ -0,0 +1,67 @@
+---
+title: Develop Plugins
+---
+
+
+### **Quick Start**
+
+You can quickly understand how to develop different types of plugins and master the functional components involved in plugin development through these development examples:
+
+  initialize-development-tools.md
+
+The guide below uses the **Google Search** tool as an example to demonstrate how to develop tool-type plugins. For more details, please refer to the following:
+
+  tool-plugin.md
+
+By examining the **Anthropic** and **Xinference** models, we present separate guides on how to develop predefined model plugins and custom model plugins.
+
+* Predefined models are pre-trained and validated, typically commercial models (such as the GPT series and Claude series models). These models can be utilized directly to accomplish specific tasks without additional training or configuration.
+* Custom model plugins enable developers to integrate privately trained or specifically configured models tailored to meet local needs.
+
+For development examples, refer to the following content:
+
+  model-plugin
+
+Extension plugins enable developers to package business code as plugins and automatically provide an Endpoint request entry, functioning like an API service hosted on the Dify platform. For more details, please refer to:
+
+  extension-plugin.md
+
+### **Endpoints Documentation**
+
+If you want to read detailed interface documentation for plugin projects, you can refer to these standard specification documents:
+
+1. [General Specifications](../../schema-definition/general-specifications.md)
+2. [Manifest Definitions](../../schema-definition/manifest.md)
+3. [Tool Integration Definitions](../../schema-definition/tool.md)
+4. [Model Integration Introduction](../../schema-definition/model/)
+5. [Endpoint Definitions](../../schema-definition/endpoint.md)
+6. [Extended Agent Strategy](../../schema-definition/agent.md)
+7. [Reverse Invocation of Dify Services](../../schema-definition/reverse-invocation-of-the-dify-service/)
+   1. [Reverse Invoking Apps](../../schema-definition/reverse-invocation-of-the-dify-service/app.md)
+   2. [Reverse Invoking Models](../../schema-definition/reverse-invocation-of-the-dify-service/model.md)
+   3. [Reverse Invoking Nodes](../../schema-definition/reverse-invocation-of-the-dify-service/node.md)
+   4. [Reverse Invoking Tools](../../schema-definition/reverse-invocation-of-the-dify-service/tool.md)
+8. [Plugin Persistence Storage Capabilities](../../schema-definition/persistent-storage.md)
+
+### **Contribution Guidelines**
+
+Want to contribute code and features to the Dify Marketplace?
+
+We provide detailed development guidelines and contribution guidelines to help you understand our architecture design and contribution process:
+
+* [Dify Plugin Contribution Guidelines](../../publish-plugins/publish-to-dify-marketplace/)
+
+  Learn how to submit your plugin to the Dify Marketplace to share your work with a broader developer community.
+* [GitHub Publishing Guidelines](../../publish-plugins/publish-plugin-on-personal-github-repo.md)
+
+  Discover how to publish and manage your plugins on GitHub, ensuring ongoing optimization and collaboration with the community.
+
+We welcome you to join us as a contributor and help enhance the Dify ecosystem alongside developers worldwide!
diff --git a/en/plugins/quick-start/develop-plugins/agent-strategy-plugin.mdx b/en/plugins/quick-start/develop-plugins/agent-strategy-plugin.mdx
new file mode 100644
index 00000000..02b34361
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/agent-strategy-plugin.mdx
@@ -0,0 +1,1086 @@
+---
+title: Agent Strategy Plugin
+---
+
+
+An **Agent Strategy Plugin** helps an LLM carry out tasks like reasoning or decision-making, including choosing and calling tools, as well as handling results. This allows the system to address problems more autonomously.
+
+Below, you'll see how to develop a plugin that supports **Function Calling** to automatically fetch the current time.
+
+### Prerequisites
+
+- Dify plugin scaffolding tool
+- Python environment (version ≥ 3.12)
+
+For details on preparing the plugin development tool, see [Initializing the Development Tool](initialize-development-tools.md).
+
+
+**Tip**: Run `dify version` in your terminal to confirm that the scaffolding tool is installed.
+
+
+---
+
+### 1. Initializing the Plugin Template
+
+Run the following command to create a development template for your Agent plugin:
+
+```bash
+dify plugin init
+```
+
+Follow the on-screen prompts and refer to the sample comments for guidance.
+
+```bash
+➜ Dify Plugins Developing dify plugin init
+Edit profile of the plugin
+Plugin name (press Enter to next step): # Enter the plugin name
+Author (press Enter to next step): Author name # Enter the author name
+Description (press Enter to next step): Description # Enter the plugin description
+---
+Select the language you want to use for plugin development, and press Enter to con
+BTW, you need Python 3.12+ to develop the Plugin if you choose Python.
+-> python # Select the Python environment
+   go (not supported yet)
+---
+Based on the ability you want to extend, we have divided the Plugin into four type
+
+- Tool: It's a tool provider, but not only limited to tools, you can implement an
+- Model: Just a model provider, extending others is not allowed.
+- Extension: Other times, you may only need a simple http service to extend the fu
+- Agent Strategy: Implement your own logics here, just by focusing on Agent itself
+
+What's more, we have provided the template for you, you can choose one of them b
+   tool
+-> agent-strategy # Select the Agent strategy template
+   llm
+   text-embedding
+---
+Configure the permissions of the plugin, use up and down to navigate, tab to sel
+Backwards Invocation:
+Tools:
+   Enabled: [✔]  You can invoke tools inside Dify if it's enabled # Enabled by default
+Models:
+   Enabled: [✔]  You can invoke models inside Dify if it's enabled # Enabled by default
+   LLM: [✔]  You can invoke LLM models inside Dify if it's enabled # Enabled by default
+   Text Embedding: [✘]  You can invoke text embedding models inside Dify if it'
+   Rerank: [✘]  You can invoke rerank models inside Dify if it's enabled
+...
+```
+
+After initialization, you'll get a folder containing all the resources needed for plugin development.
Familiarizing yourself with the overall structure of an Agent Strategy Plugin will streamline the development process: + +```text +├── GUIDE.md # User guide and documentation +├── PRIVACY.md # Privacy policy and data handling guidelines +├── README.md # Project overview and setup instructions +├── _assets/ # Static assets directory +│ └── icon.svg # Agent strategy provider icon/logo +├── main.py # Main application entry point +├── manifest.yaml # Basic plugin configuration +├── provider/ # Provider configurations directory +│ └── basic_agent.yaml # Your agent provider settings +├── requirements.txt # Python dependencies list +└── strategies/ # Strategy implementation directory + ├── basic_agent.py # Basic agent strategy implementation + └── basic_agent.yaml # Basic agent strategy configuration +``` + +All key functionality for this plugin is in the `strategies/` directory. + +--- + +### 2. Developing the Plugin + +Agent Strategy Plugin development revolves around two files: + +- **Plugin Declaration**: `strategies/basic_agent.yaml` +- **Plugin Implementation**: `strategies/basic_agent.py` + +#### 2.1 Defining Parameters + +To build an Agent plugin, start by specifying the necessary parameters in `strategies/basic_agent.yaml`. These parameters define the plugin’s core features, such as calling an LLM or using tools. + +We recommend including the following four parameters first: + +1. **model**: The large language model to call (e.g., GPT-4, GPT-4o-mini). +2. **tools**: A list of tools that enhance your plugin’s functionality. +3. **query**: The user input or prompt content sent to the model. +4. **maximum_iterations**: The maximum iteration count to prevent excessive computation. 
+
+Example Code:
+
+```yaml
+identity:
+  name: basic_agent # the name of the agent_strategy
+  author: novice # the author of the agent_strategy
+  label:
+    en_US: BasicAgent # the english label of the agent_strategy
+description:
+  en_US: BasicAgent # the english description of the agent_strategy
+parameters:
+  - name: model # the name of the model parameter
+    type: model-selector # model-type
+    scope: tool-call&llm # the scope of the parameter
+    required: true
+    label:
+      en_US: Model
+      zh_Hans: 模型
+      pt_BR: Model
+  - name: tools # the name of the tools parameter
+    type: array[tools] # the type of tool parameter
+    required: true
+    label:
+      en_US: Tools list
+      zh_Hans: 工具列表
+      pt_BR: Tools list
+  - name: query # the name of the query parameter
+    type: string # the type of query parameter
+    required: true
+    label:
+      en_US: Query
+      zh_Hans: 查询
+      pt_BR: Query
+  - name: maximum_iterations
+    type: number
+    required: false
+    default: 5
+    label:
+      en_US: Maximum Iterations
+      zh_Hans: 最大迭代次数
+      pt_BR: Maximum Iterations
+    max: 50 # if you set max and min values, the parameter will be displayed as a slider
+    min: 1
+extra:
+  python:
+    source: strategies/basic_agent.py
+```
+
+Once you've configured these parameters, the plugin will automatically generate a user-friendly interface so you can easily manage them:
+
+![Agent Strategy Plugin UI](https://assets-docs.dify.ai/2025/01/d011e2eba4c37f07a9564067ba787df8.png)
+
+#### 2.2 Retrieving Parameters and Execution
+
+After users fill out these basic fields, your plugin needs to process the submitted parameters. In `strategies/basic_agent.py`, define a parameter class for the Agent, then retrieve and apply these parameters in your logic.
+
+Verify incoming parameters:
+
+```python
+from dify_plugin.entities.agent import AgentInvokeMessage
+from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity
+from pydantic import BaseModel
+
+class BasicParams(BaseModel):
+    maximum_iterations: int
+    model: AgentModelConfig
+    tools: list[ToolEntity]
+    query: str
+```
+
+After retrieving the parameters, execute the business logic:
+
+```python
+class BasicAgentAgentStrategy(AgentStrategy):
+    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
+        params = BasicParams(**parameters)
+```
+
+### 3. Invoking the Model
+
+In an Agent Strategy Plugin, **invoking the model** is central to the workflow. You can invoke an LLM efficiently using `session.model.llm.invoke()` from the SDK, handling text generation, dialogue, and so forth.
+
+If you want the LLM to **handle tools**, ensure it outputs structured parameters to match a tool's interface. In other words, the LLM must produce input arguments that the tool can accept based on the user's instructions.
+
+Construct the following parameters:
+
+* model
+* prompt\_messages
+* tools
+* stop
+* stream
+
+Example code for method definition:
+
+```python
+def invoke(
+    self,
+    model_config: LLMModelConfig,
+    prompt_messages: list[PromptMessage],
+    tools: list[PromptMessageTool] | None = None,
+    stop: list[str] | None = None,
+    stream: bool = True,
+) -> Generator[LLMResultChunk, None, None] | LLMResult: ...
+```
+
+To view the complete functionality implementation, please refer to the Example Code for model invocation.
+
+This code achieves the following functionality: after a user inputs a command, the Agent strategy plugin automatically calls the LLM, constructs the necessary parameters for tool invocation based on the generated results, and enables the model to flexibly dispatch integrated tools to efficiently complete complex tasks.
+
+![Request parameters for generating tools](https://assets-docs.dify.ai/2025/01/01e32c2d77150213c7c929b3cceb4dae.png)
+
+### 4. Handling Tool Calls
+
+After specifying the tool parameters, the Agent Strategy Plugin must actually call these tools. Use `session.tool.invoke()` to make those requests.
+
+Construct the following parameters:
+
+- provider_type
+- provider
+- tool_name
+- parameters
+
+Example code for method definition:
+
+```python
+    def invoke(
+        self,
+        provider_type: ToolProviderType,
+        provider: str,
+        tool_name: str,
+        parameters: dict[str, Any],
+    ) -> Generator[ToolInvokeMessage, None, None]: ...
+```
+
+If you'd like the LLM itself to generate the parameters needed for tool calls, you can do so by combining the model's output with your tool-calling code.
+
+```python
+tool_instances = (
+    {tool.identity.name: tool for tool in params.tools} if params.tools else {}
+)
+for tool_call_id, tool_call_name, tool_call_args in tool_calls:
+    tool_instance = tool_instances[tool_call_name]
+    self.session.tool.invoke(
+        provider_type=ToolProviderType.BUILT_IN,
+        provider=tool_instance.identity.provider,
+        tool_name=tool_instance.identity.name,
+        parameters={**tool_instance.runtime_parameters, **tool_call_args},
+    )
+```
+
+With this in place, your Agent Strategy Plugin can automatically perform **Function Calling**—for instance, retrieving the current time.
+
+![Tool Invocation](https://assets-docs.dify.ai/2025/01/80e5de8acc2b0ed00524e490fd611ff5.png)
+
+### 5. Creating Logs
+
+Often, multiple steps are necessary to complete a complex task in an **Agent Strategy Plugin**. It's crucial for developers to track each step's results, analyze the decision process, and optimize the strategy. Using `create_log_message` and `finish_log_message` from the SDK, you can log real-time states before and after calls, aiding in quick problem diagnosis.
+
+For example:
+- Log a "starting model call" message before calling the model, clarifying the task's execution progress.
+- Log a “call succeeded” message once the model responds, ensuring the model’s output can be traced end to end. + +```python +model_log = self.create_log_message( + label=f"{params.model.model} Thought", + data={}, + metadata={"start_at": model_started_at, "provider": params.model.provider}, + status=ToolInvokeMessage.LogMessage.LogStatus.START, + ) +yield model_log +self.session.model.llm.invoke(...) +yield self.finish_log_message( + log=model_log, + data={ + "output": response, + "tool_name": tool_call_names, + "tool_input": tool_call_inputs, + }, + metadata={ + "started_at": model_started_at, + "finished_at": time.perf_counter(), + "elapsed_time": time.perf_counter() - model_started_at, + "provider": params.model.provider, + }, +) +``` + +When the setup is complete, the workflow log will output the execution results: + +![Agent Output execution results](https://assets-docs.dify.ai/2025/01/96516388a4fb1da9cea85fc1804ff377.png) + + +If multiple rounds of logs occur, you can structure them hierarchically by setting a `parent` parameter in your log calls, making them easier to follow. 
+ +Reference method: + +```python +function_call_round_log = self.create_log_message( + label="Function Call Round1 ", + data={}, + metadata={}, +) +yield function_call_round_log + +model_log = self.create_log_message( + label=f"{params.model.model} Thought", + data={}, + metadata={"start_at": model_started_at, "provider": params.model.provider}, + status=ToolInvokeMessage.LogMessage.LogStatus.START, + # add parent log + parent=function_call_round_log, +) +yield model_log +``` + +#### Sample code for agent-plugin functions + + + + #### Invoke Model + +The following code demonstrates how to give the Agent strategy plugin the ability to invoke the model: + +```python +import json +from collections.abc import Generator +from typing import Any, cast + +from dify_plugin.entities.agent import AgentInvokeMessage +from dify_plugin.entities.model.llm import LLMModelConfig, LLMResult, LLMResultChunk +from dify_plugin.entities.model.message import ( + PromptMessageTool, + UserPromptMessage, +) +from dify_plugin.entities.tool import ToolInvokeMessage, ToolParameter, ToolProviderType +from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity +from pydantic import BaseModel + +class BasicParams(BaseModel): + maximum_iterations: int + model: AgentModelConfig + tools: list[ToolEntity] + query: str + +class BasicAgentAgentStrategy(AgentStrategy): + def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]: + params = BasicParams(**parameters) + chunks: Generator[LLMResultChunk, None, None] | LLMResult = ( + self.session.model.llm.invoke( + model_config=LLMModelConfig(**params.model.model_dump(mode="json")), + prompt_messages=[UserPromptMessage(content=params.query)], + tools=[ + self._convert_tool_to_prompt_message_tool(tool) + for tool in params.tools + ], + stop=params.model.completion_params.get("stop", []) + if params.model.completion_params + else [], + stream=True, + ) + ) + response = "" + tool_calls = [] + tool_instances = ( + 
{tool.identity.name: tool for tool in params.tools} if params.tools else {}
+        )
+
+        for chunk in chunks:
+            # check if there is any tool call
+            if self.check_tool_calls(chunk):
+                tool_calls = self.extract_tool_calls(chunk)
+                tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls])
+                try:
+                    tool_call_inputs = json.dumps(
+                        {tool_call[1]: tool_call[2] for tool_call in tool_calls},
+                        ensure_ascii=False,
+                    )
+                except json.JSONDecodeError:
+                    # ensure ascii to avoid encoding error
+                    tool_call_inputs = json.dumps(
+                        {tool_call[1]: tool_call[2] for tool_call in tool_calls}
+                    )
+                print(tool_call_names, tool_call_inputs)
+            if chunk.delta.message and chunk.delta.message.content:
+                if isinstance(chunk.delta.message.content, list):
+                    for content in chunk.delta.message.content:
+                        response += content.data
+                        print(content.data, end="", flush=True)
+                else:
+                    response += str(chunk.delta.message.content)
+                    print(str(chunk.delta.message.content), end="", flush=True)
+
+            if chunk.delta.usage:
+                # usage of the model
+                usage = chunk.delta.usage
+
+        yield self.create_text_message(
+            text=f"{response or json.dumps(tool_calls, ensure_ascii=False)}\n"
+        )
+        result = ""
+        for tool_call_id, tool_call_name, tool_call_args in tool_calls:
+            # look the tool up with .get() so an unknown name cannot raise a KeyError
+            tool_instance = tool_instances.get(tool_call_name)
+            if not tool_instance:
+                tool_invoke_responses = {
+                    "tool_call_id": tool_call_id,
+                    "tool_call_name": tool_call_name,
+                    "tool_response": f"there is no tool named {tool_call_name}",
+                }
+            else:
+                # invoke tool
+                tool_invoke_responses = self.session.tool.invoke(
+                    provider_type=ToolProviderType.BUILT_IN,
+                    provider=tool_instance.identity.provider,
+                    tool_name=tool_instance.identity.name,
+                    parameters={**tool_instance.runtime_parameters, 
**tool_call_args}, + ) + result = "" + for tool_invoke_response in tool_invoke_responses: + if tool_invoke_response.type == ToolInvokeMessage.MessageType.TEXT: + result += cast( + ToolInvokeMessage.TextMessage, tool_invoke_response.message + ).text + elif ( + tool_invoke_response.type == ToolInvokeMessage.MessageType.LINK + ): + result += ( + f"result link: {cast(ToolInvokeMessage.TextMessage, tool_invoke_response.message).text}." + + " please tell user to check it." + ) + elif tool_invoke_response.type in { + ToolInvokeMessage.MessageType.IMAGE_LINK, + ToolInvokeMessage.MessageType.IMAGE, + }: + result += ( + "image has been created and sent to user already, " + + "you do not need to create it, just tell the user to check it now." + ) + elif ( + tool_invoke_response.type == ToolInvokeMessage.MessageType.JSON + ): + text = json.dumps( + cast( + ToolInvokeMessage.JsonMessage, + tool_invoke_response.message, + ).json_object, + ensure_ascii=False, + ) + result += f"tool response: {text}." + else: + result += f"tool response: {tool_invoke_response.message!r}." 
+ + tool_response = { + "tool_call_id": tool_call_id, + "tool_call_name": tool_call_name, + "tool_response": result, + } + yield self.create_text_message(result) + + def _convert_tool_to_prompt_message_tool( + self, tool: ToolEntity + ) -> PromptMessageTool: + """ + convert tool to prompt message tool + """ + message_tool = PromptMessageTool( + name=tool.identity.name, + description=tool.description.llm if tool.description else "", + parameters={ + "type": "object", + "properties": {}, + "required": [], + }, + ) + + parameters = tool.parameters + for parameter in parameters: + if parameter.form != ToolParameter.ToolParameterForm.LLM: + continue + + parameter_type = parameter.type + if parameter.type in { + ToolParameter.ToolParameterType.FILE, + ToolParameter.ToolParameterType.FILES, + }: + continue + enum = [] + if parameter.type == ToolParameter.ToolParameterType.SELECT: + enum = ( + [option.value for option in parameter.options] + if parameter.options + else [] + ) + + message_tool.parameters["properties"][parameter.name] = { + "type": parameter_type, + "description": parameter.llm_description or "", + } + + if len(enum) > 0: + message_tool.parameters["properties"][parameter.name]["enum"] = enum + + if parameter.required: + message_tool.parameters["required"].append(parameter.name) + + return message_tool + + def check_tool_calls(self, llm_result_chunk: LLMResultChunk) -> bool: + """ + Check if there is any tool call in llm result chunk + """ + return bool(llm_result_chunk.delta.message.tool_calls) + + def extract_tool_calls( + self, llm_result_chunk: LLMResultChunk + ) -> list[tuple[str, str, dict[str, Any]]]: + """ + Extract tool calls from llm result chunk + + Returns: + List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)] + """ + tool_calls = [] + for prompt_message in llm_result_chunk.delta.message.tool_calls: + args = {} + if prompt_message.function.arguments != "": + args = json.loads(prompt_message.function.arguments) + 
+ tool_calls.append( + ( + prompt_message.id, + prompt_message.function.name, + args, + ) + ) + + return tool_calls +``` + + + #### Handle Tools + +The following code shows how to implement model calls for the Agent strategy plugin and send canonicalized requests to the tool. + +```python +import json +from collections.abc import Generator +from typing import Any, cast + +from dify_plugin.entities.agent import AgentInvokeMessage +from dify_plugin.entities.model.llm import LLMModelConfig, LLMResult, LLMResultChunk +from dify_plugin.entities.model.message import ( + PromptMessageTool, + UserPromptMessage, +) +from dify_plugin.entities.tool import ToolInvokeMessage, ToolParameter, ToolProviderType +from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity +from pydantic import BaseModel + +class BasicParams(BaseModel): + maximum_iterations: int + model: AgentModelConfig + tools: list[ToolEntity] + query: str + +class BasicAgentAgentStrategy(AgentStrategy): + def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]: + params = BasicParams(**parameters) + chunks: Generator[LLMResultChunk, None, None] | LLMResult = ( + self.session.model.llm.invoke( + model_config=LLMModelConfig(**params.model.model_dump(mode="json")), + prompt_messages=[UserPromptMessage(content=params.query)], + tools=[ + self._convert_tool_to_prompt_message_tool(tool) + for tool in params.tools + ], + stop=params.model.completion_params.get("stop", []) + if params.model.completion_params + else [], + stream=True, + ) + ) + response = "" + tool_calls = [] + tool_instances = ( + {tool.identity.name: tool for tool in params.tools} if params.tools else {} + ) + + for chunk in chunks: + # check if there is any tool call + if self.check_tool_calls(chunk): + tool_calls = self.extract_tool_calls(chunk) + tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls]) + try: + tool_call_inputs = json.dumps( + {tool_call[1]: tool_call[2] for tool_call in 
tool_calls},
+                        ensure_ascii=False,
+                    )
+                except json.JSONDecodeError:
+                    # ensure ascii to avoid encoding error
+                    tool_call_inputs = json.dumps(
+                        {tool_call[1]: tool_call[2] for tool_call in tool_calls}
+                    )
+                print(tool_call_names, tool_call_inputs)
+            if chunk.delta.message and chunk.delta.message.content:
+                if isinstance(chunk.delta.message.content, list):
+                    for content in chunk.delta.message.content:
+                        response += content.data
+                        print(content.data, end="", flush=True)
+                else:
+                    response += str(chunk.delta.message.content)
+                    print(str(chunk.delta.message.content), end="", flush=True)
+
+            if chunk.delta.usage:
+                # usage of the model
+                usage = chunk.delta.usage
+
+        yield self.create_text_message(
+            text=f"{response or json.dumps(tool_calls, ensure_ascii=False)}\n"
+        )
+        result = ""
+        for tool_call_id, tool_call_name, tool_call_args in tool_calls:
+            # look the tool up with .get() so an unknown name cannot raise a KeyError
+            tool_instance = tool_instances.get(tool_call_name)
+            if not tool_instance:
+                tool_invoke_responses = {
+                    "tool_call_id": tool_call_id,
+                    "tool_call_name": tool_call_name,
+                    "tool_response": f"there is no tool named {tool_call_name}",
+                }
+            else:
+                # invoke tool
+                tool_invoke_responses = self.session.tool.invoke(
+                    provider_type=ToolProviderType.BUILT_IN,
+                    provider=tool_instance.identity.provider,
+                    tool_name=tool_instance.identity.name,
+                    parameters={**tool_instance.runtime_parameters, **tool_call_args},
+                )
+            result = ""
+            for tool_invoke_response in tool_invoke_responses:
+                if tool_invoke_response.type == ToolInvokeMessage.MessageType.TEXT:
+                    result += cast(
+                        ToolInvokeMessage.TextMessage, tool_invoke_response.message
+                    ).text
+                elif (
+                    tool_invoke_response.type == ToolInvokeMessage.MessageType.LINK
+                ):
+                    result += (
+                        f"result link: 
{cast(ToolInvokeMessage.TextMessage, tool_invoke_response.message).text}." + + " please tell user to check it." + ) + elif tool_invoke_response.type in { + ToolInvokeMessage.MessageType.IMAGE_LINK, + ToolInvokeMessage.MessageType.IMAGE, + }: + result += ( + "image has been created and sent to user already, " + + "you do not need to create it, just tell the user to check it now." + ) + elif ( + tool_invoke_response.type == ToolInvokeMessage.MessageType.JSON + ): + text = json.dumps( + cast( + ToolInvokeMessage.JsonMessage, + tool_invoke_response.message, + ).json_object, + ensure_ascii=False, + ) + result += f"tool response: {text}." + else: + result += f"tool response: {tool_invoke_response.message!r}." + + tool_response = { + "tool_call_id": tool_call_id, + "tool_call_name": tool_call_name, + "tool_response": result, + } + yield self.create_text_message(result) + + def _convert_tool_to_prompt_message_tool( + self, tool: ToolEntity + ) -> PromptMessageTool: + """ + convert tool to prompt message tool + """ + message_tool = PromptMessageTool( + name=tool.identity.name, + description=tool.description.llm if tool.description else "", + parameters={ + "type": "object", + "properties": {}, + "required": [], + }, + ) + + parameters = tool.parameters + for parameter in parameters: + if parameter.form != ToolParameter.ToolParameterForm.LLM: + continue + + parameter_type = parameter.type + if parameter.type in { + ToolParameter.ToolParameterType.FILE, + ToolParameter.ToolParameterType.FILES, + }: + continue + enum = [] + if parameter.type == ToolParameter.ToolParameterType.SELECT: + enum = ( + [option.value for option in parameter.options] + if parameter.options + else [] + ) + + message_tool.parameters["properties"][parameter.name] = { + "type": parameter_type, + "description": parameter.llm_description or "", + } + + if len(enum) > 0: + message_tool.parameters["properties"][parameter.name]["enum"] = enum + + if parameter.required: + 
message_tool.parameters["required"].append(parameter.name)
+
+        return message_tool
+
+    def check_tool_calls(self, llm_result_chunk: LLMResultChunk) -> bool:
+        """
+        Check if there is any tool call in llm result chunk
+        """
+        return bool(llm_result_chunk.delta.message.tool_calls)
+
+    def extract_tool_calls(
+        self, llm_result_chunk: LLMResultChunk
+    ) -> list[tuple[str, str, dict[str, Any]]]:
+        """
+        Extract tool calls from llm result chunk
+
+        Returns:
+            List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
+        """
+        tool_calls = []
+        for prompt_message in llm_result_chunk.delta.message.tool_calls:
+            args = {}
+            if prompt_message.function.arguments != "":
+                args = json.loads(prompt_message.function.arguments)
+
+            tool_calls.append(
+                (
+                    prompt_message.id,
+                    prompt_message.function.name,
+                    args,
+                )
+            )
+
+        return tool_calls
+```
+
+  #### Complete example code
+
+Complete sample plugin code that combines **invoking the model**, **handling tools**, and **outputting multiple rounds of logs**:
+
+```python
+import json
+import time
+from collections.abc import Generator
+from typing import Any, cast
+
+from dify_plugin.entities.agent import AgentInvokeMessage
+from dify_plugin.entities.model.llm import LLMModelConfig, LLMResult, LLMResultChunk
+from dify_plugin.entities.model.message import (
+    PromptMessageTool,
+    UserPromptMessage,
+)
+from dify_plugin.entities.tool import ToolInvokeMessage, ToolParameter, ToolProviderType
+from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity
+from pydantic import BaseModel
+
+class BasicParams(BaseModel):
+    maximum_iterations: int
+    model: AgentModelConfig
+    tools: list[ToolEntity]
+    query: str
+
+class BasicAgentAgentStrategy(AgentStrategy):
+    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
+        params = BasicParams(**parameters)
+        function_call_round_log = self.create_log_message(
+            label="Function Call Round1 ",
+            
data={}, + metadata={}, + ) + yield function_call_round_log + model_started_at = time.perf_counter() + model_log = self.create_log_message( + label=f"{params.model.model} Thought", + data={}, + metadata={"start_at": model_started_at, "provider": params.model.provider}, + status=ToolInvokeMessage.LogMessage.LogStatus.START, + parent=function_call_round_log, + ) + yield model_log + chunks: Generator[LLMResultChunk, None, None] | LLMResult = ( + self.session.model.llm.invoke( + model_config=LLMModelConfig(**params.model.model_dump(mode="json")), + prompt_messages=[UserPromptMessage(content=params.query)], + tools=[ + self._convert_tool_to_prompt_message_tool(tool) + for tool in params.tools + ], + stop=params.model.completion_params.get("stop", []) + if params.model.completion_params + else [], + stream=True, + ) + ) + response = "" + tool_calls = [] + tool_instances = ( + {tool.identity.name: tool for tool in params.tools} if params.tools else {} + ) + tool_call_names = "" + tool_call_inputs = "" + for chunk in chunks: + # check if there is any tool call + if self.check_tool_calls(chunk): + tool_calls = self.extract_tool_calls(chunk) + tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls]) + try: + tool_call_inputs = json.dumps( + {tool_call[1]: tool_call[2] for tool_call in tool_calls}, + ensure_ascii=False, + ) + except json.JSONDecodeError: + # ensure ascii to avoid encoding error + tool_call_inputs = json.dumps( + {tool_call[1]: tool_call[2] for tool_call in tool_calls} + ) + print(tool_call_names, tool_call_inputs) + if chunk.delta.message and chunk.delta.message.content: + if isinstance(chunk.delta.message.content, list): + for content in chunk.delta.message.content: + response += content.data + print(content.data, end="", flush=True) + else: + response += str(chunk.delta.message.content) + print(str(chunk.delta.message.content), end="", flush=True) + + if chunk.delta.usage: + # usage of the model + usage = chunk.delta.usage + + yield 
self.finish_log_message(
+            log=model_log,
+            data={
+                "output": response,
+                "tool_name": tool_call_names,
+                "tool_input": tool_call_inputs,
+            },
+            metadata={
+                "started_at": model_started_at,
+                "finished_at": time.perf_counter(),
+                "elapsed_time": time.perf_counter() - model_started_at,
+                "provider": params.model.provider,
+            },
+        )
+        yield self.create_text_message(
+            text=f"{response or json.dumps(tool_calls, ensure_ascii=False)}\n"
+        )
+        result = ""
+        for tool_call_id, tool_call_name, tool_call_args in tool_calls:
+            # look the tool up with .get() so an unknown name cannot raise a KeyError
+            tool_instance = tool_instances.get(tool_call_name)
+            if not tool_instance:
+                tool_invoke_responses = {
+                    "tool_call_id": tool_call_id,
+                    "tool_call_name": tool_call_name,
+                    "tool_response": f"there is no tool named {tool_call_name}",
+                }
+            else:
+                # invoke tool
+                tool_invoke_responses = self.session.tool.invoke(
+                    provider_type=ToolProviderType.BUILT_IN,
+                    provider=tool_instance.identity.provider,
+                    tool_name=tool_instance.identity.name,
+                    parameters={**tool_instance.runtime_parameters, **tool_call_args},
+                )
+            result = ""
+            for tool_invoke_response in tool_invoke_responses:
+                if tool_invoke_response.type == ToolInvokeMessage.MessageType.TEXT:
+                    result += cast(
+                        ToolInvokeMessage.TextMessage, tool_invoke_response.message
+                    ).text
+                elif (
+                    tool_invoke_response.type == ToolInvokeMessage.MessageType.LINK
+                ):
+                    result += (
+                        f"result link: {cast(ToolInvokeMessage.TextMessage, tool_invoke_response.message).text}."
+                        + " please tell user to check it."
+ ) + elif tool_invoke_response.type in { + ToolInvokeMessage.MessageType.IMAGE_LINK, + ToolInvokeMessage.MessageType.IMAGE, + }: + result += ( + "image has been created and sent to user already, " + + "you do not need to create it, just tell the user to check it now." + ) + elif ( + tool_invoke_response.type == ToolInvokeMessage.MessageType.JSON + ): + text = json.dumps( + cast( + ToolInvokeMessage.JsonMessage, + tool_invoke_response.message, + ).json_object, + ensure_ascii=False, + ) + result += f"tool response: {text}." + else: + result += f"tool response: {tool_invoke_response.message!r}." + + tool_response = { + "tool_call_id": tool_call_id, + "tool_call_name": tool_call_name, + "tool_response": result, + } + yield self.create_text_message(result) + + def _convert_tool_to_prompt_message_tool( + self, tool: ToolEntity + ) -> PromptMessageTool: + """ + convert tool to prompt message tool + """ + message_tool = PromptMessageTool( + name=tool.identity.name, + description=tool.description.llm if tool.description else "", + parameters={ + "type": "object", + "properties": {}, + "required": [], + }, + ) + + parameters = tool.parameters + for parameter in parameters: + if parameter.form != ToolParameter.ToolParameterForm.LLM: + continue + + parameter_type = parameter.type + if parameter.type in { + ToolParameter.ToolParameterType.FILE, + ToolParameter.ToolParameterType.FILES, + }: + continue + enum = [] + if parameter.type == ToolParameter.ToolParameterType.SELECT: + enum = ( + [option.value for option in parameter.options] + if parameter.options + else [] + ) + + message_tool.parameters["properties"][parameter.name] = { + "type": parameter_type, + "description": parameter.llm_description or "", + } + + if len(enum) > 0: + message_tool.parameters["properties"][parameter.name]["enum"] = enum + + if parameter.required: + message_tool.parameters["required"].append(parameter.name) + + return message_tool + + def check_tool_calls(self, llm_result_chunk: LLMResultChunk) -> 
bool:
+        """
+        Check if there is any tool call in llm result chunk
+        """
+        return bool(llm_result_chunk.delta.message.tool_calls)
+
+    def extract_tool_calls(
+        self, llm_result_chunk: LLMResultChunk
+    ) -> list[tuple[str, str, dict[str, Any]]]:
+        """
+        Extract tool calls from llm result chunk
+
+        Returns:
+            List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
+        """
+        tool_calls = []
+        for prompt_message in llm_result_chunk.delta.message.tool_calls:
+            args = {}
+            if prompt_message.function.arguments != "":
+                args = json.loads(prompt_message.function.arguments)
+
+            tool_calls.append(
+                (
+                    prompt_message.id,
+                    prompt_message.function.name,
+                    args,
+                )
+            )
+
+        return tool_calls
+```
+
+
+
+### 3. Debugging the Plugin
+
+After finalizing the plugin’s declaration file and implementation code, run `python -m main` in the plugin directory to restart it. Next, confirm the plugin runs correctly. Dify offers remote debugging—go to [“Plugin Management”](https://console-plugin.dify.dev/plugins) to obtain your debug key and remote server address.
+
+![](https://assets-docs.dify.ai/2024/12/053415ef127f1f4d6dd85dd3ae79626a.png)
+
+Back in your plugin project, copy `.env.example` to `.env` and insert the relevant remote server and debug key info.
+
+```bash
+INSTALL_METHOD=remote
+REMOTE_INSTALL_HOST=remote
+REMOTE_INSTALL_PORT=5003
+REMOTE_INSTALL_KEY=****-****-****-****-****
+```
+
+
+Then run:
+
+```bash
+python -m main
+```
+
+You’ll see the plugin installed in your Workspace, and team members can also access it.
+
+![Browser Plugins](https://assets-docs.dify.ai/2025/01/c82ec0202e5bf914b36e06c796398dd6.png)
+
+### Packaging the Plugin (Optional)
+
+Once everything works, you can package your plugin by running:
+
+```bash
+# Replace ./basic_agent/ with your actual plugin project path.
+
+dify plugin package ./basic_agent/
+```
+
+A file named `basic_agent.difypkg` appears in your current folder—this is your final plugin package.
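Note that the sample strategies above run a single model round even though `BasicParams` declares `maximum_iterations`. A framework-free sketch of the multi-round **model invoke → tool use** loop a production strategy would run (`call_model` and `call_tool` are stand-in callables for this sketch, not Dify SDK functions):

```python
from typing import Any, Callable

def run_agent_rounds(
    call_model: Callable[[list[dict]], dict],
    call_tool: Callable[[str, dict[str, Any]], str],
    query: str,
    maximum_iterations: int = 5,
) -> str:
    """Repeat model invoke -> tool use until the model stops requesting tools."""
    messages: list[dict] = [{"role": "user", "content": query}]
    for _ in range(maximum_iterations):
        # the model sees the full history, including earlier tool results
        reply = call_model(messages)  # {"content": str, "tool_calls": [(name, args), ...]}
        messages.append({"role": "assistant", "content": reply["content"]})
        if not reply["tool_calls"]:
            return reply["content"]  # no more tool requests: the task is done
        for name, args in reply["tool_calls"]:
            messages.append({"role": "tool", "content": call_tool(name, args)})
    return messages[-1]["content"]  # iteration budget exhausted
```

Capping the loop with `maximum_iterations` is what keeps a confused model from requesting tools forever.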
+
+**Congratulations!** You’ve fully developed, tested, and packaged your Agent Strategy Plugin.
+
+### Publishing the Plugin (Optional)
+
+You can now upload it to the [Dify Plugins repository](https://github.com/langgenius/dify-plugins). Before doing so, ensure it meets the [Plugin Publishing Guidelines](https://docs.dify.ai/plugins/publish-plugins/publish-to-dify-marketplace). Once approved, your code merges into the main branch, and the plugin automatically goes live on the [Dify Marketplace](https://marketplace.dify.ai/).
+
+---
+
+### Further Exploration
+
+Complex tasks often need multiple rounds of thinking and tool calls, typically repeating **model invoke → tool use** until the task ends or a maximum iteration limit is reached. Managing prompts effectively is crucial in this process. Check out the [complete Function Calling implementation](https://github.com/langgenius/dify-official-plugins/blob/main/agent-strategies/cot_agent/strategies/function_calling.py) for a standardized approach to letting models call external tools and handle their outputs.
diff --git a/en/plugins/quick-start/develop-plugins/bundle.mdx b/en/plugins/quick-start/develop-plugins/bundle.mdx
new file mode 100644
index 00000000..56b34a36
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/bundle.mdx
@@ -0,0 +1,81 @@
+---
+title: Bundle
+---
+
+
+A Bundle plugin package is a collection of multiple plugins packaged together so that they can be installed in a single step.
+
+You can package multiple plugins into a Bundle using the Dify CLI tool. Bundles come in three types:
+
+* `Marketplace` type: Stores plugin IDs and version information. During import, the specific plugin packages are downloaded through the Dify Marketplace.
+* `GitHub` type: Stores the GitHub repository address, release version number, and asset filename. During import, Dify accesses the corresponding GitHub repository to download the plugin packages.
+* `Package` type: Plugin packages are stored directly in the Bundle. It doesn't store reference sources but may cause large Bundle package sizes. + +### **Prerequisites** + +* Dify plugin scaffolding tool +* Python environment, version ≥ 3.10; + +For detailed instructions on preparing the plugin development scaffolding tool, please refer to [Initializing Development Tools](initialize-development-tools.md). + +### **Create Bundle Project** + +In the current path, run the scaffolding command-line tool to create a new plugin package project: + +```bash +./dify-plugin-darwin-arm64 bundle init +``` + +#### **1. Enter Plugin Information** + +Follow the prompts to configure plugin name, author information, and plugin description. If you're working in a team, you can also enter an organization name as the author. + +> The name must be 1-128 characters long and can only contain letters, numbers, hyphens, and underscores. + +![Bundle basic informatio](https://assets-docs.dify.ai/2024/12/03a1c4cdc72213f09523eb1b40832279.png) + +Fill in the information and hit enter, the Bundle plugin project directory will be created automatically. + +![](https://assets-docs.dify.ai/2024/12/356d1a8201fac3759bf01ee64e79a52b.png) + +#### **2. Add Dependencies** + +* **Marketplace** + +Execute the following command: + +```bash +dify-plugin bundle append marketplace . --marketplace_pattern=langgenius/openai:0.0.1 +``` + +Where marketplace\_pattern is the plugin reference in the marketplace, format: organization-name/plugin-name:version + +* **Github** + +Execute the following command: + +```bash +dify-plugin bundle append github . --repo_pattern=langgenius/openai:0.0.1/openai.difypkg +``` + +Where repo\_pattern is the plugin reference in github, format: `organization-name/repository-name:release/attachment-name` + +* **Package** + +Execute the following command: + +```bash +dify-plugin bundle append package . 
--package_path=./openai.difypkg
+```
+
+Where package\_path is the path to the plugin package file.
+
+### **Package Bundle Project**
+
+Run the following command to package the Bundle plugin:
+
+```bash
+dify-plugin bundle package ./bundle
+```
+
+After executing the command, a `bundle.difybndl` file will be automatically created in the current directory, which is the final packaging result.
diff --git a/en/plugins/quick-start/develop-plugins/extension-plugin.mdx b/en/plugins/quick-start/develop-plugins/extension-plugin.mdx
new file mode 100644
index 00000000..0d899c2e
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/extension-plugin.mdx
@@ -0,0 +1,273 @@
+---
+title: Extension Plugin
+---
+
+
+This guide will help you quickly develop an Extension type plugin and understand the basic plugin development process.
+
+### **Prerequisites**
+
+* Dify plugin scaffolding tool
+* Python environment, version ≥ 3.12
+
+For detailed instructions on preparing the plugin development scaffolding tool, please refer to [Initializing Development Tools](initialize-development-tools.md).
+
+### **Create New Project**
+
+In the current path, run the CLI tool to create a new Dify plugin project:
+
+```bash
+./dify-plugin-darwin-arm64 plugin init
+```
+
+If you have renamed the binary file to `dify` and copied it to the `/usr/local/bin` path, you can run the following command to create a new plugin project:
+
+```bash
+dify plugin init
+```
+
+### **Fill Plugin Information**
+
+Follow the prompts to configure the plugin name, author information, and plugin description. If you're working in a team, you can also enter an organization name as the author.
+
+> The plugin name must be 1-128 characters long and can only contain letters, numbers, hyphens, and underscores.
+
+![Plugins detail](https://assets-docs.dify.ai/2024/12/75cfccb11fe31c56c16429b3998f2eb0.png)
+
+Once filled out, select Python in the Plugin Development Language section.
+ +![Plugins development: Python](https://assets-docs.dify.ai/2024/11/1129101623ac4c091a3f6f75f4103848.png) + +### **3. Select Plugin Type and Initialize Project Template** + +All templates in the scaffolding tool provide complete code projects. For demonstration purposes, this guide will use the `Extension` type plugin template as an example. For developers already familiar with plugin development, templates are not necessary, and you can refer to the interface documentation to complete different types of plugin development. + +![Extension](https://assets-docs.dify.ai/2024/11/ff08f77b928494e10197b456fc4e2d5b.png) + +#### **Configure Plugin Permissions** + +The plugin needs permissions to access the Dify main platform for proper connection. The following permissions need to be granted for this example plugin: + +* Tools +* LLMs +* Apps +* Enable persistent Storage with default size allocation +* Allow Endpoint registration + +> Use arrow keys in the terminal to select permissions, and use the "Tab" key to grant permissions. + +After checking all permission items, press Enter to complete the plugin creation. The system will automatically generate the plugin project code. + +![Plugins permissions](https://assets-docs.dify.ai/2024/11/5518ca1e425a7135f18f499e55d16bdd.png) + +The base file structure of the plugin contains the following: + +``` +. +├── GUIDE.md +├── README.md +├── _assets +│ └── icon.svg +├── endpoints +│ ├── your-project.py +│ └── your-project.yaml +├── group +│ └── your-project.yaml +├── main.py +├── manifest.yaml +└── requirements.txt +``` + +* `GUIDE.md`: A brief tutorial guide that leads you through the plugin writing process. +* `README.md`: Basic introduction about the current plugin. You need to fill this file with information about the plugin and its usage instructions. +* `_assets`: Stores all multimedia files related to the current plugin. 
+* `endpoints`: For an `Extension` type plugin template created following the CLI guidance, this directory contains all implementation code for the Endpoint functionality.
+* `group`: Specifies key types, multilingual settings, and API definition file paths.
+* `main.py`: The entry file for the entire project.
+* `manifest.yaml`: The basic configuration file for the entire plugin, containing information such as required permissions and the extension type.
+* `requirements.txt`: Contains Python environment dependencies.
+
+### Developing Plugins
+
+#### **1. Define Plugin's Request Endpoint**
+
+Edit `endpoints/test_plugin.yaml`, modifying it according to the following code:
+
+```yaml
+path: "/neko"
+method: "GET"
+extra:
+  python:
+    source: "endpoints/test_plugin.py"
+```
+
+This code defines the plugin's entry path as `/neko`, with a GET request method. The plugin's functionality implementation code is in the `endpoints/test_plugin.py` file.
+
+#### **2. Write Plugin Functionality**
+
+Plugin functionality: Request the plugin service to output a cat.
+ +Write the plugin's implementation code in the `endpoints/test_plugin.py` file, referring to the following example code: + +```python +from typing import Mapping +from werkzeug import Request, Response +from flask import Flask, render_template_string +from dify_plugin import Endpoint + +app = Flask(__name__) + +class NekoEndpoint(Endpoint): + def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response: + ascii_art = ''' +⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛⬛️⬜️⬜️⬜️⬜️⬜⬜️⬜️️ +🟥🟥⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️🟥🟥🟥🟥🟥🟥🟥🟥⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬛🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧⬛️⬜️⬜️⬜️⬜️⬜⬜️️ +🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥⬛️🥧🥧🥧💟💟💟💟💟💟💟💟💟💟💟💟💟🥧🥧🥧⬛️⬜️⬜️⬜️⬜⬜️️ +🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥⬛️🥧🥧💟💟💟💟💟💟🍓💟💟🍓💟💟💟💟💟🥧🥧⬛️⬜️⬜️⬜️⬜️⬜️️ +🟧🟧🟥🟥🟥🟥🟥🟥🟥🟥🟧🟧🟧🟧🟧🟧🟧🟧🟥🟥🟥🟥🟥🟥🟥⬛🥧💟💟🍓💟💟💟💟💟💟💟💟💟💟💟💟💟💟🥧⬛️⬜️⬜️⬜️⬜⬜️️ +🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧⬛️🥧💟💟💟💟💟💟💟💟💟💟⬛️⬛️💟💟🍓💟💟🥧⬛️⬜️⬛️️⬛️️⬜⬜️️ +🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧🟧⬛️🥧💟💟💟💟💟💟💟💟💟⬛️🌫🌫⬛💟💟💟💟🥧⬛️⬛️🌫🌫⬛⬜️️ +🟨🟨🟧🟧🟧🟧🟧🟧🟧🟧🟨🟨🟨🟨🟨🟨🟨🟨🟧⬛️⬛️⬛️⬛️🟧🟧⬛️🥧💟💟💟💟💟💟🍓💟💟⬛️🌫🌫🌫⬛💟💟💟🥧⬛️🌫🌫🌫⬛⬜️️ +🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨⬛️🌫🌫⬛️⬛️🟧⬛️🥧💟💟💟💟💟💟💟💟💟⬛️🌫🌫🌫🌫⬛️⬛️⬛️⬛️🌫🌫🌫🌫⬛⬜️️ +🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨🟨⬛️⬛️🌫🌫⬛️⬛️⬛️🥧💟💟💟🍓💟💟💟💟💟⬛️🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫⬛⬜️️ +🟩🟩🟨🟨🟨🟨🟨🟨🟨🟨🟩🟩🟩🟩🟩🟩🟩🟩🟨🟨⬛⬛️🌫🌫⬛️⬛️🥧💟💟💟💟💟💟💟🍓⬛️🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫⬛️ +🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩⬛️⬛️🌫🌫⬛️🥧💟🍓💟💟💟💟💟💟⬛️🌫🌫🌫⬜️⬛️🌫🌫🌫🌫🌫⬜️⬛️🌫🌫⬛️ +️🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩⬛️⬛️⬛️⬛️🥧💟💟💟💟💟💟💟💟⬛️🌫🌫🌫⬛️⬛️🌫🌫🌫⬛️🌫⬛️⬛️🌫🌫⬛️ +🟦🟦🟩🟩🟩🟩🟩🟩🟩🟩🟦🟦🟦🟦🟦🟦🟦🟦🟩🟩🟩🟩🟩🟩⬛️⬛️🥧💟💟💟💟💟🍓💟💟⬛🌫🟥🟥🌫🌫🌫🌫🌫🌫🌫🌫🌫🟥🟥⬛️ +🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦⬛️🥧🥧💟🍓💟💟💟💟💟⬛️🌫🟥🟥🌫⬛️🌫🌫⬛️🌫🌫⬛️🌫🟥🟥⬛️ +🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦⬛️🥧🥧🥧💟💟💟💟💟💟💟⬛️🌫🌫🌫⬛️⬛️⬛️⬛️⬛️⬛️⬛️🌫🌫⬛️⬜️ +🟪🟪🟦🟦🟦🟦🟦🟦🟦🟦🟪🟪🟪🟪🟪🟪🟪🟪🟦🟦🟦🟦🟦🟦⬛️⬛️⬛️🥧🥧🥧🥧🥧🥧🥧🥧🥧🥧⬛️🌫🌫🌫🌫🌫🌫🌫🌫🌫🌫⬛️⬜️⬜️ +🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪⬛️🌫🌫🌫⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬛️⬜️⬜️⬜️ +🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪⬛️🌫🌫⬛️⬛️⬜️⬛️🌫🌫⬛️⬜️⬜️⬜️⬜️⬜️⬛️🌫🌫⬛️⬜️⬛️🌫🌫⬛️⬜️⬜️⬜️⬜️ +⬜️⬜️🟪🟪🟪🟪🟪🟪🟪🟪⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️🟪🟪🟪🟪🟪⬛️⬛️⬛️⬛⬜️⬜️⬛️⬛️⬛️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬛️⬛️⬛️⬜️⬜️⬛️⬛️⬜️⬜️⬜️⬜️⬜️️ + ''' + ascii_art_lines = ascii_art.strip().split('\n') + with app.app_context(): + return Response(render_template_string(''' + + + + + + +
+
+
+
+    ''', ascii_art_lines=ascii_art_lines), status=200, content_type="text/html")
+```
+
+Install the following Python dependencies before running this code:
+
+```bash
+pip install werkzeug
+pip install flask
+pip install dify-plugin
+```
+
+### Debugging Plugins
+
+Dify provides a remote debugging method. Go to the "Plugin Management" page to obtain the debugging key and remote server address.
+
+![](https://assets-docs.dify.ai/2024/12/053415ef127f1f4d6dd85dd3ae79626a.png)
+
+Go back to the plugin project, copy the `.env.example` file, and rename it to `.env`. Fill it in with the remote server address and debugging key.
+
+The `.env` file:
+
+```bash
+INSTALL_METHOD=remote
+REMOTE_INSTALL_HOST=remote
+REMOTE_INSTALL_PORT=5003
+REMOTE_INSTALL_KEY=****-****-****-****-****
+```
+
+Run the `python -m main` command to launch the plugin. You can see on the plugin page that the plugin has been installed into your Workspace. Other team members can also access the plugin.
+
+![](https://assets-docs.dify.ai/2024/11/0fe19a8386b1234755395018bc2e0e35.png)
+
+### Packaging the Plugin
+
+After confirming that the plugin works properly, you can package and name it with the following command-line tool. After running it, you will find a `neko.difypkg` file in the current folder, which is the final plugin package.
+
+```bash
+# Replace ./neko with your actual plugin project path.
+
+dify plugin package ./neko
+```
+
+Congratulations, you have completed the full development, debugging, and packaging process of an extension type plugin!
+
+### Publishing Plugins
+
+You can now publish your plugin by uploading it to the [Dify Plugins code repository](https://github.com/langgenius/dify-plugins)! Before uploading, make sure your plugin follows the [plugin release guide](../publish-plugins/publish-to-dify-marketplace.md). Once approved, the code will be merged into the master branch and automatically go live in the [Dify Marketplace](https://marketplace.dify.ai/).
+
+#### Exploring More
+
+**Quick Start:**
+
+* [Develop Extension Type Plugin](extension-plugin.md)
+* [Develop Model Type Plugin](model-plugin/)
+* [Bundle Type Plugin: Package Multiple Plugins](bundle.md)
+
+**Plugin Specification Definition Documentation:**
+
+* [Manifest](../schema-definition/manifest.md)
+* [Endpoint](../schema-definition/endpoint.md)
+* [Reverse Invocation of the Dify Service](../schema-definition/reverse-invocation-of-the-dify-service/)
+* [Tools](../../guides/tools/)
+* [Models](../schema-definition/model/model-schema.md)
+* [Extend Agent Strategy](../schema-definition/agent.md)
+
+**Best Practices:**
+
+[Develop a Slack Bot Plugin](../../best-practice/develop-a-slack-bot-plugin.md)
diff --git a/en/plugins/quick-start/develop-plugins/initialize-development-tools.mdx b/en/plugins/quick-start/develop-plugins/initialize-development-tools.mdx
new file mode 100644
index 00000000..e2d088b3
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/initialize-development-tools.mdx
@@ -0,0 +1,73 @@
+---
+title: Initialize Development Tools
+---
+
+
+Before starting to develop Dify plugins, please prepare the following prerequisites:
+
+* Dify plugin scaffolding tool
+* Python environment, version ≥ 3.12
+
+> The Dify plugin development scaffolding tool, also known as `dify-plugin-daemon`, can be regarded as a plugin development SDK.
+
+### **1. Installing the Dify Plugin Development Scaffolding Tool**
+
+Visit the [Dify plugin GitHub page](https://github.com/langgenius/dify-plugin-daemon/releases) and download the version suitable for your operating system.
+
+Using **macOS with M-series chips** as an example: download the `dify-plugin-darwin-arm64` file from the project address mentioned above. Then, in the terminal, navigate to the file's location and grant it execution permissions:
+
+```
+chmod +x dify-plugin-darwin-arm64
+```
+
+Run the following command to verify successful installation.
+
+```
+./dify-plugin-darwin-arm64 version
+```
+
+> If the system shows an "Apple cannot verify" error, go to **Settings → Privacy & Security → Security**, and click the "Open Anyway" button.
+
+If the terminal returns version information such as `v0.0.1-beta.15`, the installation was successful.
+
+
+**Tips:**
+
+If you want to use the `dify` command globally in your system to run the scaffolding tool, it's recommended to rename the binary file to `dify` and copy it to the `/usr/local/bin` system path.
+
+After configuration, entering the `dify version` command in the terminal will output the version number.
+
+
+### **2. Initialize Python Environment**
+
+For detailed instructions, please refer to the [Python installation](https://pythontest.com/python/installing-python-3-11/) tutorial. Python version 3.12 or higher is required.
+
+### **3. Develop Plugins**
+
+Please refer to the following content for examples of different types of plugin development:
+
+* [Tool Plugin](tool-plugin.md)
+* [Model Plugin](model-plugin/)
+* [Agent Strategy Plugin](agent-strategy-plugin.md)
+* [Extension Plugin](extension-plugin.md)
+* [Bundle Plugin](bundle.md)
diff --git a/en/plugins/quick-start/develop-plugins/model-plugin/README.mdx b/en/plugins/quick-start/develop-plugins/model-plugin/README.mdx
new file mode 100644
index 00000000..a509c7e9
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/model-plugin/README.mdx
@@ -0,0 +1,71 @@
+---
+title: Model Plugin
+---
+
+
+Model type plugins enable the Dify platform to request models from specific model providers. For example, after installing the OpenAI model plugin, the Dify platform can request models like GPT-4, GPT-4o-2024-05-13, etc., provided by OpenAI.
+
+### **Model Plugin Structure**
+
+To better understand the concepts involved in developing model plugins, here's an example structure within model type plugins:
+
+* Model Provider: Large model development companies, such as OpenAI, Anthropic, Google, etc.
+* Model Categories: Depending on the provider, categories include Large Language Models (LLM), Text Embedding models, Speech-to-Text models, etc. +* Specific Models: `claude-3-5-sonnet`, `gpt-4-turbo`, etc. + +Code structure in plugin projects: + +```bash +- Model Provider + - Model Category + - Specific Models +``` + +Taking Anthropic as an example, the model plugin structure looks like this: + +```bash +- Anthropic + - llm + claude-3-5-sonnet-20240620 + claude-3-haiku-20240307 + claude-3-opus-20240229 + claude-3-sonnet-20240229 + claude-instant-1.2 + claude-instant-1 +``` + +Taking OpenAI as an example, which supports multiple model types: + +```bash +├── models +│ ├── llm +│ │ ├── chatgpt-4o-latest +│ │ ├── gpt-3.5-turbo +│ │ ├── gpt-4-0125-preview +│ │ ├── gpt-4-turbo +│ │ ├── gpt-4o +│ │ ├── llm +│ │ ├── o1-preview +│ │ └── text-davinci-003 +│ ├── moderation +│ │ ├── moderation +│ │ └── text-moderation-stable +│ ├── speech2text +│ │ ├── speech2text +│ │ └── whisper-1 +│ ├── text_embedding +│ │ ├── text-embedding-3-large +│ │ └── text_embedding +│ └── tts +│ ├── tts-1-hd +│ ├── tts-1 +│ └── tts +``` + +### **Getting Started with Creating Model Plugins** + +Please follow these steps to create a model plugin, click the document titles for specific creation guides: + +1. [Create Model Provider](create-model-providers.md) +2. Integrate [Predefined](../../../guides/model-configuration/predefined-model.md)/[Custom](../../../guides/model-configuration/customizable-model.md) Models +3. 
[Debug Plugin](../../debug-plugin.md)
diff --git a/en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx b/en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx
new file mode 100644
index 00000000..fc7c95e2
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx
@@ -0,0 +1,228 @@
+---
+title: Create Model Providers
+---
+
+
+The first step in creating a Model type plugin is to initialize the plugin project and create the model provider file, followed by integrating specific predefined/custom models.
+
+### **Prerequisites**
+
+* Dify plugin scaffolding tool
+* Python environment, version ≥ 3.12
+
+For detailed instructions on preparing the plugin development scaffolding tool, please refer to [Initializing Development Tools](../initialize-development-tools.md).
+
+### **Create a New Project**
+
+In the current path, run the CLI tool to create a new Dify plugin project:
+
+```bash
+./dify-plugin-darwin-arm64 plugin init
+```
+
+If you have renamed the binary file to `dify` and copied it to the `/usr/local/bin` path, you can run the following command to create a new plugin project:
+
+```bash
+dify plugin init
+```
+
+### **Choose the Model Plugin Template**
+
+Plugins are divided into three types: tools, models, and extensions. All templates in the scaffolding tool provide complete code projects. This example will use an `LLM` type plugin.
+
+![Plugin type: llm](https://assets-docs.dify.ai/2024/12/8efe646e9174164b9edbf658b5934b86.png)
+
+#### **Configure Plugin Permissions**
+
+Configure the following permissions for this LLM plugin:
+
+* Models
+* LLM
+* Storage
+
+![Model Plugin Permission](https://assets-docs.dify.ai/2024/12/10f3b3ee6c03a1215309f13d712455d4.png)
+
+#### **Model Type Configuration**
+
+Model providers support three configuration methods:
+
+1. 
**predefined-model**: Common large model types, requiring only unified provider credentials to use the predefined models under the provider. For example, the OpenAI provider offers a series of predefined models like gpt-3.5-turbo-0125 and gpt-4o-2024-05-13. For detailed development instructions, refer to Integrating Predefined Models.
+2. **customizable-model**: You need to manually add credential configurations for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique model\_uid. To integrate both, you need to configure a model\_uid for each model. For detailed development instructions, refer to Integrating Custom Models.
+3. **fetch-from-remote**: Like predefined-model, this requires only unified provider credentials; the available models are then fetched from the provider using those credentials, as with OpenAI's fine-tuned models.
+
+These configuration methods can coexist, meaning a provider can support predefined-model + customizable-model or predefined-model + fetch-from-remote combinations.
+
+### **Adding a New Model Provider**
+
+Here are the main steps to add a new model provider:
+
+1. **Create the Model Provider Configuration YAML File**
+
+   Add a YAML file in the provider directory to describe the provider's basic information and parameter configuration. Write content according to ProviderSchema requirements to ensure consistency with system specifications.
+2. **Write the Model Provider Code**
+
+   Create provider class code, implementing a Python class that meets system interface requirements for connecting with the provider's API and implementing core functionality.
+
+***
+
+Here are the full details of each step.
+
+#### **1. Create the Model Provider Configuration File**
+
+The manifest is a YAML format file that declares the model provider's basic information, supported model types, configuration methods, and credential rules. The plugin project template will automatically generate configuration files under the `/providers` path.
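Before writing the full file, it can help to sanity-check the manifest structure programmatically. Below is a rough sketch that validates an already-parsed manifest dict against a few of the required fields; the field list is abridged from the Anthropic example that follows, and the checker itself is illustrative rather than part of the Dify SDK:

```python
# Illustrative sketch only: a minimal structural check for a parsed provider
# manifest (e.g., the result of yaml.safe_load on anthropic.yaml).
# The required-field list is abridged; the real ProviderSchema is stricter.

REQUIRED_FIELDS = ["provider", "label", "supported_model_types", "configurate_methods"]
KNOWN_MODEL_TYPES = {"llm", "text_embedding", "rerank", "speech2text", "tts", "moderation"}

def check_provider_manifest(manifest: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the check passed."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            problems.append(f"missing required field: {field}")
    for model_type in manifest.get("supported_model_types", []):
        if model_type not in KNOWN_MODEL_TYPES:
            problems.append(f"unknown model type: {model_type}")
    return problems

manifest = {
    "provider": "anthropic",
    "label": {"en_US": "Anthropic"},
    "supported_model_types": ["llm"],
    "configurate_methods": ["predefined-model"],
}
print(check_provider_manifest(manifest))  # [] means it passed
```

Running a check like this before loading the plugin surfaces typos in field names early, instead of at plugin install time.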
+
+Here's an example of the `anthropic.yaml` configuration file for `Anthropic`:
+
+```yaml
+provider: anthropic
+label:
+  en_US: Anthropic
+description:
+  en_US: Anthropic's powerful models, such as Claude 3.
+icon_small:
+  en_US: icon_s_en.svg
+icon_large:
+  en_US: icon_l_en.svg
+background: "#F0F0EB"
+help:
+  title:
+    en_US: Get your API Key from Anthropic
+  url:
+    en_US: https://console.anthropic.com/account/keys
+supported_model_types:
+  - llm
+configurate_methods:
+  - predefined-model
+provider_credential_schema:
+  credential_form_schemas:
+    - variable: anthropic_api_key
+      label:
+        en_US: API Key
+      type: secret-input
+      required: true
+      placeholder:
+        en_US: Enter your API Key
+    - variable: anthropic_api_url
+      label:
+        en_US: API URL
+      type: text-input
+      required: false
+      placeholder:
+        en_US: Enter your API URL
+models:
+  llm:
+    predefined:
+      - "models/llm/*.yaml"
+    position: "models/llm/_position.yaml"
+extra:
+  python:
+    provider_source: provider/anthropic.py
+    model_sources:
+      - "models/llm/llm.py"
+```
+
+If the provider also offers custom models, such as OpenAI's fine-tuned models, you need to add the `model_credential_schema` field.
+
+The following is a sample configuration for the `OpenAI` family of models:
+
+```yaml
+model_credential_schema:
+  model:
+    label:
+      en_US: Model Name
+    placeholder:
+      en_US: Enter your model name
+  credential_form_schemas:
+    - variable: openai_api_key
+      label:
+        en_US: API Key
+      type: secret-input
+      required: true
+      placeholder:
+        en_US: Enter your API Key
+    - variable: openai_organization
+      label:
+        en_US: Organization
+      type: text-input
+      required: false
+      placeholder:
+        en_US: Enter your Organization ID
+    - variable: openai_api_base
+      label:
+        en_US: API Base
+      type: text-input
+      required: false
+      placeholder:
+        en_US: Enter your API Base
+```
+
+For a more complete look at the Model Provider YAML specification, see [Schema](../../schema-definition/) for details.
+
+#### 2. 
**Write the Model Provider Code**
+
+Create a Python file with the same name as the provider, e.g. `anthropic.py`, in the `/providers` folder and implement a class that inherits from the `ModelProvider` base class, e.g. `AnthropicProvider`. The following is the `Anthropic` sample code:
+
+```python
+import logging
+from dify_plugin.entities.model import ModelType
+from dify_plugin.errors.model import CredentialsValidateFailedError
+from dify_plugin import ModelProvider
+
+logger = logging.getLogger(__name__)
+
+
+class AnthropicProvider(ModelProvider):
+    def validate_provider_credentials(self, credentials: dict) -> None:
+        """
+        Validate provider credentials
+
+        if validate failed, raise exception
+
+        :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
+        """
+        try:
+            model_instance = self.get_model_instance(ModelType.LLM)
+            model_instance.validate_credentials(model="claude-3-opus-20240229", credentials=credentials)
+        except CredentialsValidateFailedError as ex:
+            raise ex
+        except Exception as ex:
+            logger.exception(f"{self.get_provider_schema().provider} credentials validate failed")
+            raise ex
+```
+
+Providers need to inherit the `ModelProvider` base class and implement the unified `validate_provider_credentials` credential validation method; see `AnthropicProvider` above.
+
+```python
+def validate_provider_credentials(self, credentials: dict) -> None:
+    """
+    Validate provider credentials
+    You can choose any validate_credentials method of model type or implement validate method by yourself,
+    such as: get model list api
+
+    if validate failed, raise exception
+
+    :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
+    """
+```
+
+You can also stub out `validate_provider_credentials` first and reuse the model credential validation method directly once it is implemented. 
For other types of model providers, please refer to the following configuration methods.
+
+#### **Custom Model Providers**
+
+For custom model providers like `Xinference`, you can skip the full implementation step. Simply create an empty class called `XinferenceProvider` and implement an empty `validate_provider_credentials` method in it.
+
+**Detailed Explanation:**
+
+* `XinferenceProvider` is a placeholder class used to identify custom model providers.
+* While the `validate_provider_credentials` method won't actually be called, it must exist because its parent class is abstract and requires all child classes to implement this method. Providing an empty implementation avoids the instantiation errors that an unimplemented abstract method would cause.
+
+```python
+class XinferenceProvider(Provider):
+    def validate_provider_credentials(self, credentials: dict) -> None:
+        pass
+```
+
+After initializing the model provider, the next step is to integrate the specific LLM models offered by the provider. For detailed instructions, please refer to:
+
+* [Develop Predefined Models](../../../../guides/model-configuration/predefined-model.md)
+* [Develop Custom Models](../../../../guides/model-configuration/customizable-model.md)
diff --git a/en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx b/en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx
new file mode 100644
index 00000000..3e5125b2
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx
@@ -0,0 +1,342 @@
+---
+title: Integrate the Customizable Model
+---
+
+
+A **custom model** refers to an LLM that you deploy or configure on your own. This document uses the [Xinference model](https://inference.readthedocs.io/en/latest/) as an example to demonstrate how to integrate a custom model into your **model plugin**.
+ +By default, a custom model automatically includes two parameters—its **model type** and **model name**—and does not require additional definitions in the provider YAML file. + +You do not need to implement `validate_provider_credential` in your provider configuration file. During runtime, based on the user’s choice of model type or model name, Dify automatically calls the corresponding model layer’s `validate_credentials` method to verify credentials. + +## Integrating a Custom Model Plugin + +Below are the steps to integrate a custom model: + +1. **Create a Model Provider File**\ + Identify the model types your custom model will include. +2. **Create Code Files by Model Type**\ + Depending on the model’s type (e.g., `llm` or `text_embedding`), create separate code files. Ensure that each model type is organized into distinct logical layers for easier maintenance and future expansion. +3. **Develop the Model Invocation Logic**\ + Within each model-type module, create a Python file named for that model type (for example, `llm.py`). Define a class in the file that implements the specific model logic, conforming to the system’s model interface specifications. +4. **Debug the Plugin**\ + Write unit and integration tests for the new provider functionality, ensuring that all components work as intended. + +*** + +### 1. **Create a Model Provider File** + +In your plugin’s `/provider` directory, create a `xinference.yaml` file. + +The `Xinference` family of models supports **LLM**, **Text Embedding**, and **Rerank** model types, so your `xinference.yaml` must include all three. + +**Example:** + +```yaml +provider: xinference # Identifies the provider +label: # Display name; can set both en_US (English) and zh_Hans (Chinese). If zh_Hans is not set, en_US is used by default. + en_US: Xorbits Inference +icon_small: # Small icon; store in the _assets folder of this provider’s directory. The same multi-language logic applies as with label. 
+ en_US: icon_s_en.svg +icon_large: # Large icon + en_US: icon_l_en.svg +help: # Help information + title: + en_US: How to deploy Xinference + zh_Hans: 如何部署 Xinference + url: + en_US: https://github.com/xorbitsai/inference + +supported_model_types: # Model types Xinference supports: LLM/Text Embedding/Rerank +- llm +- text-embedding +- rerank + +configurate_methods: # Xinference is locally deployed and does not offer predefined models. Refer to its documentation to learn which model to use. Thus, we choose a customizable-model approach. +- customizable-model + +provider_credential_schema: + credential_form_schemas: +``` + +Next, define the `provider_credential_schema`. Since `Xinference` supports text-generation, embeddings, and reranking models, you can configure it as follows: + +```yaml +provider_credential_schema: + credential_form_schemas: + - variable: model_type + type: select + label: + en_US: Model type + zh_Hans: 模型类型 + required: true + options: + - value: text-generation + label: + en_US: Language Model + zh_Hans: 语言模型 + - value: embeddings + label: + en_US: Text Embedding + - value: reranking + label: + en_US: Rerank +``` + +Every model in Xinference requires a `model_name`: + +```yaml + - variable: model_name + type: text-input + label: + en_US: Model name + zh_Hans: 模型名称 + required: true + placeholder: + zh_Hans: 填写模型名称 + en_US: Input model name +``` + +Because Xinference must be locally deployed, users need to supply the server address (server\_url) and model UID. 
For instance: + +```yaml + - variable: server_url + label: + zh_Hans: 服务器 URL + en_US: Server url + type: text-input + required: true + placeholder: + zh_Hans: 在此输入 Xinference 的服务器地址,如 https://example.com/xxx + en_US: Enter the url of your Xinference, for example https://example.com/xxx + + - variable: model_uid + label: + zh_Hans: 模型 UID + en_US: Model uid + type: text-input + required: true + placeholder: + zh_Hans: 在此输入您的 Model UID + en_US: Enter the model uid +``` + +Once you’ve defined these parameters, the YAML configuration for your custom model provider is complete. Next, create the functional code files for each model defined in this config. + +### 2. Develop the Model Code + +Since Xinference supports llm, rerank, speech2text, and tts, you should create corresponding directories under /models, each containing its respective feature code. + +Below is an example for an llm-type model. You’d create a file named llm.py, then define a class—such as XinferenceAILargeLanguageModel—that extends \_\_base.large\_language\_model.LargeLanguageModel. This class should include: + +* **LLM Invocation** + +The core method for invoking the LLM, supporting both streaming and synchronous responses: + +```python +def _invoke( + self, + model: str, + credentials: dict, + prompt_messages: list[PromptMessage], + model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, + stop: Optional[list[str]] = None, + stream: bool = True, + user: Optional[str] = None +) -> Union[LLMResult, Generator]: + """ + Invoke the large language model. 
+
+    :param model: model name
+    :param credentials: model credentials
+    :param prompt_messages: prompt messages
+    :param model_parameters: model parameters
+    :param tools: tools for tool calling
+    :param stop: stop words
+    :param stream: determines if response is streamed
+    :param user: unique user id
+    :return: full response or a chunk generator
+    """
+```
+
+You'll need two separate functions to handle streaming and synchronous responses. Python treats any function containing `yield` as a generator returning type `Generator`, so it's best to split them:
+
+```python
+def _invoke(self, stream: bool, **kwargs) -> Union[LLMResult, Generator]:
+    if stream:
+        return self._handle_stream_response(**kwargs)
+    return self._handle_sync_response(**kwargs)
+
+def _handle_stream_response(self, **kwargs) -> Generator:
+    for chunk in response:
+        yield chunk
+
+def _handle_sync_response(self, **kwargs) -> LLMResult:
+    return LLMResult(**response)
+```
+
+* **Pre-calculating Input Tokens**
+
+If your model doesn't provide a token-counting interface, simply return 0:
+
+```python
+def get_num_tokens(
+    self,
+    model: str,
+    credentials: dict,
+    prompt_messages: list[PromptMessage],
+    tools: Optional[list[PromptMessageTool]] = None
+) -> int:
+    """
+    Get the number of tokens for the given prompt messages.
+    """
+    return 0
+```
+
+Alternatively, you can call `self._get_num_tokens_by_gpt2(text: str)` from the `AIModel` base class, which uses a GPT-2 tokenizer. Remember this is an approximation and may not match your model exactly.
+
+* **Validating Model Credentials**
+
+Similar to provider-level credential checks, but scoped to a single model:
+
+```python
+def validate_credentials(self, model: str, credentials: dict) -> None:
+    """
+    Validate model credentials.
+    """
+```
+
+* **Dynamic Model Parameters Schema**
+
+Unlike [predefined models](predefined-model.md), there is no YAML file defining which parameters a model supports, so you must generate the parameter schema dynamically. 
+ +For example, Xinference supports `max_tokens`, `temperature`, and `top_p`. Some other providers (e.g., `OpenLLM`) may support parameters like `top_k` only for certain models. This means you need to adapt your schema to each model’s capabilities: + +```python +def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None: + """ + used to define customizable model schema + """ + rules = [ + ParameterRule( + name='temperature', type=ParameterType.FLOAT, + use_template='temperature', + label=I18nObject( + zh_Hans='温度', en_US='Temperature' + ) + ), + ParameterRule( + name='top_p', type=ParameterType.FLOAT, + use_template='top_p', + label=I18nObject( + zh_Hans='Top P', en_US='Top P' + ) + ), + ParameterRule( + name='max_tokens', type=ParameterType.INT, + use_template='max_tokens', + min=1, + default=512, + label=I18nObject( + zh_Hans='最大生成长度', en_US='Max Tokens' + ) + ) + ] + + # if model is A, add top_k to rules + if model == 'A': + rules.append( + ParameterRule( + name='top_k', type=ParameterType.INT, + use_template='top_k', + min=1, + default=50, + label=I18nObject( + zh_Hans='Top K', en_US='Top K' + ) + ) + ) + + """ + some NOT IMPORTANT code here + """ + + entity = AIModelEntity( + model=model, + label=I18nObject( + en_US=model + ), + fetch_from=FetchFrom.CUSTOMIZABLE_MODEL, + model_type=model_type, + model_properties={ + ModelPropertyKey.MODE: ModelType.LLM, + }, + parameter_rules=rules + ) + + return entity +``` + +* **Error Mapping** + +When an error occurs during model invocation, map it to the appropriate InvokeError type recognized by the runtime. 
This lets Dify handle different errors in a standardized manner.
+
+Runtime errors:
+
+* `InvokeConnectionError`
+* `InvokeServerUnavailableError`
+* `InvokeRateLimitError`
+* `InvokeAuthorizationError`
+* `InvokeBadRequestError`
+
+```python
+@property
+def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+    """
+    Map model invocation errors to unified error types.
+    The key is the error type thrown to the caller.
+    The value is the error type thrown by the model, which needs to be mapped to a
+    unified Dify error for consistent handling.
+    """
+    # return {
+    #     InvokeConnectionError: [requests.exceptions.ConnectionError],
+    #     ...
+    # }
+```
+
+For more details on interface methods, see the [Model Documentation](https://docs.dify.ai/zh-hans/plugins/schema-definition/model).
+
+To view the complete code files discussed in this guide, visit the [GitHub Repository](https://github.com/langgenius/dify-official-plugins/tree/main/models/xinference).
+
+### 3. Debug the Plugin
+
+After finishing development, test the plugin to ensure it runs correctly. For more details, refer to [Debug Plugin](../../debug-plugin.md).
+
+### 4. 
Publish the Plugin + +If you’d like to list this plugin on the Dify Marketplace, see: + +Publish to Dify Marketplace + +## Explore More + +**Quick Start:** + +* [Develop Extension Plugin](../extension-plugin.md) +* [Develop Tool Plugin](../tool-plugin.md) +* [Bundle Plugins: Package Multiple Plugins](../bundle.md) + +**Plugins Endpoint Docs:** + +* [Manifest](../../../schema-definition/manifest.md) Structure +* [Endpoint](../../../schema-definition/endpoint.md) Definitions +* [Reverse-Invocation of the Dify Service](../../../schema-definition/reverse-invocation-of-the-dify-service/) +* [Tools](../../../schema-definition/tool.md) +* [Models](../../../schema-definition/model/) diff --git a/en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.mdx b/en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.mdx new file mode 100644 index 00000000..586d082a --- /dev/null +++ b/en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.mdx @@ -0,0 +1,262 @@ +--- +title: Integrate the Predefined Model +--- + + +Before accessing a predefined model, make sure you have created a [model provider](create-model-providers.md). Accessing predefined models is roughly divided into the following steps: + +1. **Create Module Structures by Model Type** + + Create corresponding sub-modules under the provider module based on model types (such as `llm` or `text_embedding`). Ensure each model type has its own logical layer for easy maintenance and extension. +2. **Write Model Request Code** + + Create a Python file with the same name as the model type (e.g., llm.py) under the corresponding model type module. Define a class that implements specific model logic and complies with the system's model interface specifications. +3. **Add Predefined Model Configuration** + + If the provider offers predefined models, create `YAML` files named after each model (e.g., `claude-3.5.yaml`). 
Write file content according to [AIModelEntity](../../schema-definition/model/model-designing-rules.md#modeltype) specifications, describing model parameters and functionality.
+4. **Test Plugin**
+
+   Write unit tests and integration tests for newly added provider functionality to ensure all function modules meet expectations and operate normally.
+
+***
+
+Below are the integration details:
+
+### 1. Create Module Structures by Model Type
+
+A model provider may offer different model types; for example, OpenAI provides types such as `llm` and `text_embedding`. You need to create corresponding sub-modules under the provider module, ensuring each model type has its own logical layer for easy maintenance and extension.
+
+Currently supported model types:
+
+* `llm`: Text generation models
+* `text_embedding`: Text Embedding models
+* `rerank`: Rerank models
+* `speech2text`: Speech to text
+* `tts`: Text to speech
+* `moderation`: Content moderation
+
+Taking `Anthropic` as an example, since its model series only contains LLM type models, you only need to create an `/llm` folder under the `/models` path and add YAML files for different model versions. For detailed code structure, please refer to the [GitHub repository](https://github.com/langgenius/dify-official-plugins/tree/main/models/anthropic/models/llm).
+
+![](https://assets-docs.dify.ai/2024/12/b5ef5d7c759742e4c4d34865e8608843.png)
+
+```bash
+├── models
+│   └── llm
+│       ├── _position.yaml
+│       ├── claude-2.1.yaml
+│       ├── claude-2.yaml
+│       ├── claude-3-5-sonnet-20240620.yaml
+│       ├── claude-3-haiku-20240307.yaml
+│       ├── claude-3-opus-20240229.yaml
+│       ├── claude-3-sonnet-20240229.yaml
+│       ├── claude-instant-1.2.yaml
+│       ├── claude-instant-1.yaml
+│       └── llm.py
+```
+
+If the model provider offers multiple model types, e.g., the OpenAI family includes `llm`, `text_embedding`, `moderation`, `speech2text`, and `tts` models, you need to create a folder for each type under the `/models` path. 
The structure is as follows: + +```bash +├── models +│ ├── common_openai.py +│ ├── llm +│ │ ├── _position.yaml +│ │ ├── chatgpt-4o-latest.yaml +│ │ ├── gpt-3.5-turbo.yaml +│ │ ├── gpt-4-0125-preview.yaml +│ │ ├── gpt-4-turbo.yaml +│ │ ├── gpt-4o.yaml +│ │ ├── llm.py +│ │ ├── o1-preview.yaml +│ │ └── text-davinci-003.yaml +│ ├── moderation +│ │ ├── moderation.py +│ │ └── text-moderation-stable.yaml +│ ├── speech2text +│ │ ├── speech2text.py +│ │ └── whisper-1.yaml +│ ├── text_embedding +│ │ ├── text-embedding-3-large.yaml +│ │ └── text_embedding.py +│ └── tts +│ ├── tts-1-hd.yaml +│ ├── tts-1.yaml +│ └── tts.py +``` + +It is recommended to prepare all model configurations before starting the model code implementation. For complete YAML rules, please refer to the [Model Design Rules](../../schema-definition/model/model-designing-rules.md). For more code details, please refer to the example [Github repository](https://github.com/langgenius/dify-official-plugins/tree/main/models). + +### 2. Writing Model Requesting Code + +Next, you need to create an `llm.py` code file under the `/models` path. Taking `Anthropic` as an example, create an Anthropic LLM class in `llm.py` named `AnthropicLargeLanguageModel`, inheriting from the `__base.large_language_model.LargeLanguageModel` base class. + +Here's example code for some functionality: + +* **LLM Request** + + The core method for requesting LLM, supporting both streaming and synchronous returns. 
+
+```python
+def _invoke(self, model: str, credentials: dict,
+            prompt_messages: list[PromptMessage], model_parameters: dict,
+            tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
+            stream: bool = True, user: Optional[str] = None) \
+        -> Union[LLMResult, Generator]:
+    """
+    Invoke large language model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param prompt_messages: prompt messages
+    :param model_parameters: model parameters
+    :param tools: tools for tool calling
+    :param stop: stop words
+    :param stream: is stream response
+    :param user: unique user id
+    :return: full response or stream response chunk generator result
+    """
+```
+
+In the implementation, take care to use two separate functions to handle synchronous and streaming returns. Any Python function whose body contains the `yield` keyword is treated as a generator function, and its return type is fixed to `Generator`, so synchronous and streaming returns must be implemented independently to keep the logic clear and to accommodate the two different return requirements.
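This generator pitfall can be demonstrated with a few lines of plain Python (a standalone sketch, unrelated to the Dify SDK): once `yield` appears anywhere in a function body, every call to that function returns a generator, even on branches that only `return`:

```python
import inspect

def broken_invoke(stream: bool):
    """A single function trying to serve both modes; this does NOT work."""
    if stream:
        yield "chunk"           # the mere presence of yield turns the whole
    else:                       # function into a generator function
        return "full response"  # inside a generator, return only stops
                                # iteration; the caller never sees this string

result = broken_invoke(stream=False)
print(inspect.isgenerator(result))  # True: we got a generator, not a str
print(list(result))                 # []: the "full response" value is lost
```

Splitting the logic into a plain function for the synchronous path and a generator for the streaming path, as in the dispatch sample below, avoids this.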
+ +Here's the sample code (the parameters are simplified in the example, so please follow the full parameter list in the actual implementation): + +```python +def _invoke(self, stream: bool, **kwargs) -> Union[LLMResult, Generator]: + """Call the corresponding processing function based on return type.""" + if stream: + return self._handle_stream_response(**kwargs) + return self._handle_sync_response(**kwargs) + +def _handle_stream_response(self, **kwargs) -> Generator: + """Handle streaming response logic.""" + for chunk in response: # Assume response is a streaming data iterator + yield chunk + +def _handle_sync_response(self, **kwargs) -> LLMResult: + """Handle synchronous response logic.""" + return LLMResult(**response) # Assume response is a complete response dictionary +``` + +* **Pre-calculated number of input tokens** + +If the model does not provide an interface to pre-calculate tokens, it can simply return 0, which is used to indicate that the feature is not applicable or not implemented. Example: + +```python +def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ +``` + +* **Request Exception Error Mapping Table** + +When a model call encounters an exception, it needs to be mapped to the `InvokeError` type specified by Runtime, allowing Dify to handle different errors differently. 
+
+Runtime Errors:
+
+* `InvokeConnectionError`: Connection error during invocation
+* `InvokeServerUnavailableError`: Service provider unavailable
+* `InvokeRateLimitError`: Rate limit reached
+* `InvokeAuthorizationError`: Authorization failure during invocation
+* `InvokeBadRequestError`: Invalid parameters in the invocation request
+
+```python
+@property
+def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+    """
+    Map model invoke error to unified error
+    The key is the error type thrown to the caller
+    The value is the error type thrown by the model,
+    which needs to be converted into a unified error type for the caller.
+
+    :return: Invoke error mapping
+    """
+```
+
+See the [Github code repository](https://github.com/langgenius/dify-official-plugins/blob/main/models/anthropic/models/llm/llm.py) for full code details.
+
+### 3. Add Predefined Model Configurations
+
+If the provider offers predefined models, create a YAML file for each model, named after the model (e.g. claude-3.5.yaml). Write the file contents according to the AIModelEntity specification, describing the model's parameters and functionality.
+
+Example configuration for the `claude-3-5-sonnet-20240620` model:
+
+```yaml
+model: claude-3-5-sonnet-20240620
+label:
+  en_US: claude-3-5-sonnet-20240620
+model_type: llm
+features:
+  - agent-thought
+  - vision
+  - tool-call
+  - stream-tool-call
+  - document
+model_properties:
+  mode: chat
+  context_size: 200000
+parameter_rules:
+  - name: temperature
+    use_template: temperature
+  - name: top_p
+    use_template: top_p
+  - name: top_k
+    label:
+      zh_Hans:
+      en_US: Top k
+    type: int
+    help:
+      zh_Hans:
+      en_US: Only sample from the top K options for each subsequent token.
+    required: false
+  - name: max_tokens
+    use_template: max_tokens
+    required: true
+    default: 8192
+    min: 1
+    max: 8192
+  - name: response_format
+    use_template: response_format
+pricing:
+  input: '3.00'
+  output: '15.00'
+  unit: '0.000001'
+  currency: USD
+```
+
+### 4. Debugging Plugins
+
+Dify provides a remote debugging method. Go to the "Plugin Management" page to get the debugging key and remote server address. Check here for more details:
+
+
+  debug-plugin.md
+
+
+### Publishing Plugins
+
+You can now publish your plugin by uploading it to the [Dify Plugins code repository](https://github.com/langgenius/dify-plugins)! Before uploading, make sure your plugin follows the [plugin release guide](../../publish-plugins/publish-to-dify-marketplace.md). Once approved, the code will be merged into the master branch and automatically go live in the [Dify Marketplace](https://marketplace.dify.ai/).
+
+#### Exploring More
+
+**Quick Start:**
+
+* [Develop Extension Type Plugin](../extension-plugin.md)
+* [Develop Model Type Plugin](./)
+* [Bundle Type Plugin: Package Multiple Plugins](../bundle.md)
+
+**Plugin Specification Definition Documentation:**
+
+* [Manifest](../../schema-definition/manifest.md)
+* [Endpoint](../../schema-definition/endpoint.md)
+* [Reverse Invocation of the Dify Service](../../schema-definition/reverse-invocation-of-the-dify-service/)
+* [Tools](../../../guides/tools/)
+* [Models](../../schema-definition/model/model-schema.md)
+* [Extend Agent Strategy](../../schema-definition/agent.md)
diff --git a/en/plugins/quick-start/develop-plugins/tool-plugin.mdx b/en/plugins/quick-start/develop-plugins/tool-plugin.mdx
new file mode 100644
index 00000000..01e02b98
--- /dev/null
+++ b/en/plugins/quick-start/develop-plugins/tool-plugin.mdx
@@ -0,0 +1,359 @@
+---
+title: Tool Plugin
+---
+
+
+Tool type plugins are external tools that can be referenced by Chatflow / Workflow / Agent application types to enhance the capabilities of Dify applications.
For example, adding online search capabilities, image generation capabilities, etc. to an application. Tool type plugins provide a complete set of tools and API implementations.
+
+![](https://assets-docs.dify.ai/2024/12/7e7bcf1f9e3acf72c6917ea9de4e4613.png)
+
+In this article, a "Tool Plugin" refers to a complete project that includes the tool provider file, functional code, and other related components. A tool provider may encompass multiple Tools (which can be understood as additional functionalities offered by a single tool), structured as follows:
+
+```
+- Tool provider
+  - Tool A
+  - Tool B
+```
+
+![Tool structure](https://assets-docs.dify.ai/2025/02/60c4c86a317d865133aa460592eac079.png)
+
+This article uses `GoogleSearch` as an example to show how to quickly develop a tool-type plugin.
+
+### **Prerequisites**
+
+* Dify plugin scaffolding tool
+* Python, version ≥ 3.12
+
+For detailed instructions on how to prepare the scaffolding tools for plugin development, see [Initializing Development Tools](initialize-development-tools.md).
+
+### **Create New Project**
+
+Run the CLI tool to create a new Dify plugin project:
+
+```bash
+./dify-plugin-darwin-arm64 plugin init
+```
+
+If you have renamed the binary file to `dify` and copied it to the `/usr/local/bin` path, you can run the following command to create a new plugin project:
+
+```bash
+dify plugin init
+```
+
+> In the following, the command-line tool `dify` is used. If issues occur, please replace the `dify` command with the appropriate path to your command-line tool.
+
+### Select plugin type and template
+
+There are three types of plugins: tool, model, and extension. All templates within the SDK come with full code projects. The following walkthrough uses the **Tool plugin** template as an example.
+
+> If you are already familiar with plugin development, please refer to the [Schema Definition](../../schema-definition/) to implement various types of plugins.
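Once the initialization flow below completes, the scaffold generates a project skeleton. As a rough orientation, the files used in the rest of this guide sit in a layout like this (a sketch only; exact template contents may vary by scaffold version):

```
├── _assets
│   └── icon.svg           # provider icon, referenced by the provider yaml
├── provider
│   ├── google.yaml        # tool provider configuration
│   └── google.py          # provider credential-validation code
├── tools
│   ├── google_search.yaml # tool definition
│   └── google_search.py   # tool implementation
├── manifest.yaml          # plugin manifest
├── main.py                # plugin entry point, launched via `python -m main`
└── .env.example           # template for the remote-debugging configuration
```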
+
+![Plugins type](https://assets-docs.dify.ai/2024/12/dd3c0f9a66454e15868eabced7b74fd6.png)
+
+#### Configuring Plugin Permissions
+
+The plugin also needs to be granted permissions on the Dify platform to connect properly. The following permissions need to be granted for the example tool plugin:
+
+* Tools
+* Apps
+* Enable persistent Storage and allocate the default storage size
+* Allow registration of Endpoint
+
+> Use the arrow keys to select permissions within the terminal and the "Tab" key to grant them.
+
+After checking all the permission items, press Enter to complete the creation of the plugin. The system will automatically generate the plugin project code.
+
+![Plugins permissions](https://assets-docs.dify.ai/2024/12/9cf92c2e74dce55e6e9e331d031e5a9f.png)
+
+### Developing Tool Plugins
+
+#### 1. Create the tool vendor yaml file
+
+The tool vendor file can be understood as the base configuration entry point for a tool-type plugin; it is used to provide the necessary authorization information to the tool. This section demonstrates how to fill out that yaml file.
+
+Go to the `/provider` path and rename the yaml file in it to `google.yaml`. This `yaml` file will contain information about the tool vendor, including the provider name, icon, author, and other details. This information will be displayed when the plugin is installed.
+
+Example:
+
+```yaml
+identity:
+  author: Your-name
+  name: google
+  label:
+    en_US: Google
+    zh_Hans: Google
+  description:
+    en_US: Google
+    zh_Hans: Google
+  icon: icon.svg
+  tags:
+    - search
+```
+
+* `identity` contains basic information about the tool provider, including author, name, label, description, icon, and more.
+  * The icon needs to be an attachment resource placed in the `_assets` folder in the project root directory.
+  * Tags help users quickly find plugins by category; here are all the tags currently supported:
+    * ```python
+      class ToolLabelEnum(Enum):
+          SEARCH = 'search'
+          IMAGE = 'image'
+          VIDEOS = 'videos'
+          WEATHER = 'weather'
+          FINANCE = 'finance'
+          DESIGN = 'design'
+          TRAVEL = 'travel'
+          SOCIAL = 'social'
+          NEWS = 'news'
+          MEDICAL = 'medical'
+          PRODUCTIVITY = 'productivity'
+          EDUCATION = 'education'
+          BUSINESS = 'business'
+          ENTERTAINMENT = 'entertainment'
+          UTILITIES = 'utilities'
+          OTHER = 'other'
+      ```
+
+Make sure the provider file is registered under the `plugins` field, as shown below; the `google.yaml` entry must use the file's full path within the plugin project:
+
+```yaml
+plugins:
+  tools:
+    - "google.yaml"
+```
+
+* **Completing third-party service credentials**
+
+For ease of development, we have chosen to use the Google Search API provided by a third-party service, SerpApi. SerpApi requires an API key, so you need to add a `credentials_for_provider` field to the `yaml` file.
+
+The full code is below:
+
+```yaml
+identity:
+  author: Dify
+  name: google
+  label:
+    en_US: Google
+    zh_Hans: Google
+    pt_BR: Google
+  description:
+    en_US: Google
+    zh_Hans: GoogleSearch
+    pt_BR: Google
+  icon: icon.svg
+  tags:
+    - search
+credentials_for_provider: # Add the credentials_for_provider field
+  serpapi_api_key:
+    type: secret-input
+    required: true
+    label:
+      en_US: SerpApi API key
+      zh_Hans: SerpApi API key
+    placeholder:
+      en_US: Please input your SerpApi API key
+      zh_Hans: 请输入你的 SerpApi API key
+    help:
+      en_US: Get your SerpApi API key from SerpApi
+      zh_Hans: 从 SerpApi 获取您的 SerpApi API key
+    url: https://serpapi.com/manage-api-key
+tools:
+  - tools/google_search.yaml
+extra:
+  python:
+    source: google.py
+```
+
+* The `credentials_for_provider` sub-level structure needs to satisfy the [ProviderConfig](../schema-definition/general-specifications.md#providerconfig) specification.
+* It is necessary to specify which tools are included in this provider. This example only includes a `tools/google_search.yaml` file.
+
+* For the provider, in addition to defining its basic information, you also need to specify where its code logic is implemented. In this example, the implementation file is `google.py`; rather than writing it now, we will first write the code for `google_search`.
+
+#### 2. Fill out the tool yaml file
+
+There can be multiple tools under a tool vendor, and each tool needs to be described by a `yaml` file containing basic information about the tool, its parameters, its output, and so on.
+
+Still using the `GoogleSearch` tool as an example, create a new `google_search.yaml` file in the `/tools` folder.
+
+```yaml
+identity:
+  name: google_search
+  author: Dify
+  label:
+    en_US: GoogleSearch
+    zh_Hans: Google Search
+    pt_BR: GoogleSearch
+description:
+  human:
+    en_US: A tool for performing a Google SERP search and extracting snippets and webpages. Input should be a search query.
+    zh_Hans: A tool for performing Google SERP search and extracting snippets and webpages. Input should be a search query.
+    pt_BR: A tool for performing a Google SERP search and extracting snippets and webpages. Input should be a search query.
+  llm: A tool for performing a Google SERP search and extracting snippets and webpages. Input should be a search query.
+parameters:
+  - name: query
+    type: string
+    required: true
+    label:
+      en_US: Query string
+      zh_Hans: Query string
+      pt_BR: Query string
+    human_description:
+      en_US: used for searching
+      zh_Hans: used for searching webpage content
+      pt_BR: used for searching
+    llm_description: key words for searching
+    form: llm
+extra:
+  python:
+    source: tools/google_search.py
+```
+
+* `identity` contains the tool's basic information, including name, author, labels, description, etc.
+* `parameters` the parameter list
+  * `name` (required) parameter name; must be unique and cannot duplicate other parameter names
+  * `type` (required) parameter type; currently supports five types: `string`, `number`, `boolean`, `select`, `secret-input`, corresponding to string, number, boolean, dropdown menu, and encrypted input field. For sensitive information, please use the `secret-input` type
+  * `label` (required) parameter label, used for frontend display
+  * `form` (required) form type; currently supports two types: `llm` and `form`
+    * In Agent applications, `llm` means the parameter is inferred by the LLM, while `form` means the parameter can be preset before using the tool
+    * In workflow applications, both `llm` and `form` need to be filled in on the frontend, but `llm` parameters will serve as input variables for tool nodes
+  * `required` whether the field is required
+    * In `llm` mode, if a parameter is required, the Agent must infer this parameter
+    * In `form` mode, if a parameter is required, users must fill it in on the frontend before starting the conversation
+  * `options` parameter options
+    * In `llm` mode, Dify will pass all options to the LLM, which can make inferences based on them
+    * In `form` mode, when `type` is `select`, the frontend will display these options
+  * `default` default value
+  * `min` minimum value; can be set when the parameter type is `number`
+  * `max` maximum value; can be set when the parameter type is `number`
+  * `human_description` the description displayed on the frontend; supports multiple languages
+  * `placeholder` prompt text for input fields; can be set when the form type is `form` and the parameter type is `string`, `number`, or `secret-input`; supports multiple languages
+  * `llm_description` the description passed to the LLM. To help the LLM better understand this parameter, provide as much detail as possible here so that the LLM can understand it
+
+#### 3.
Write the tool code
+
+After filling in the tool's configuration, you can start writing the tool's functional code to implement its logic. Create `google_search.py` in the `/tools` directory with the following contents.
+
+```python
+from collections.abc import Generator
+from typing import Any
+
+import requests
+
+from dify_plugin import Tool
+from dify_plugin.entities.tool import ToolInvokeMessage
+
+SERP_API_URL = "https://serpapi.com/search"
+
+class GoogleSearchTool(Tool):
+    def _parse_response(self, response: dict) -> dict:
+        result = {}
+        if "knowledge_graph" in response:
+            result["title"] = response["knowledge_graph"].get("title", "")
+            result["description"] = response["knowledge_graph"].get("description", "")
+        if "organic_results" in response:
+            result["organic_results"] = [
+                {
+                    "title": item.get("title", ""),
+                    "link": item.get("link", ""),
+                    "snippet": item.get("snippet", ""),
+                }
+                for item in response["organic_results"]
+            ]
+        return result
+
+    def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]:
+        params = {
+            "api_key": self.runtime.credentials["serpapi_api_key"],
+            "q": tool_parameters["query"],
+            "engine": "google",
+            "google_domain": "google.com",
+            "gl": "us",
+            "hl": "en",
+        }
+
+        response = requests.get(url=SERP_API_URL, params=params, timeout=5)
+        response.raise_for_status()
+        valuable_res = self._parse_response(response.json())
+
+        yield self.create_json_message(valuable_res)
+```
+
+In this example, we simply send a request to SerpApi and use `self.create_json_message` to return `json`-formatted data. For more information on the types of data that can be returned, refer to the [tool](../schema-definition/tool.md) documentation.
+
+#### 4. Complete the tool vendor code
+
+Finally, you need to create the vendor implementation code, which carries the vendor's credential validation logic.
If the credential validation fails, a `ToolProviderCredentialValidationError` exception will be thrown. After successful validation, the `google_search` tool service will be requested correctly.
+
+Create a `google.py` file in the `/provider` directory with the following code:
+
+```python
+from typing import Any
+
+from dify_plugin import ToolProvider
+from dify_plugin.errors.tool import ToolProviderCredentialValidationError
+from tools.google_search import GoogleSearchTool
+
+class GoogleProvider(ToolProvider):
+    def _validate_credentials(self, credentials: dict[str, Any]) -> None:
+        try:
+            for _ in GoogleSearchTool.from_credentials(credentials).invoke(
+                tool_parameters={"query": "test", "result_type": "link"},
+            ):
+                pass
+        except Exception as e:
+            raise ToolProviderCredentialValidationError(str(e))
+```
+
+### Debugging Plugins
+
+Dify provides a remote debugging method. Go to the "Plugin Management" page to get the debugging key and remote server address.
+
+![](https://assets-docs.dify.ai/2024/12/053415ef127f1f4d6dd85dd3ae79626a.png)
+
+Go back to the plugin project, copy the `.env.example` file and rename it to `.env`, then fill it with the remote server address and debugging key.
+
+The `.env` file:
+
+```bash
+INSTALL_METHOD=remote
+REMOTE_INSTALL_HOST=remote
+REMOTE_INSTALL_PORT=5003
+REMOTE_INSTALL_KEY=****-****-****-****-****
+```
+
+Run the `python -m main` command to launch the plugin. You can see on the plugin page that the plugin has been installed into the Workspace, and other team members can also access it.
+
+![](https://assets-docs.dify.ai/2024/11/0fe19a8386b1234755395018bc2e0e35.png)
+
+### Packaging the Plugin
+
+After confirming that the plugin works properly, you can package and name it with the following command-line tool. After running it, you will find a `google.difypkg` file in the current folder; this is the final plugin package.
+
+```bash
+# Replace ./google with your actual plugin project path.
+
+dify plugin package ./google
+```
+
+Congratulations, you have completed the full development, debugging, and packaging process of a tool-type plugin!
+
+### Publishing Plugins
+
+You can now publish your plugin by uploading it to the [Dify Plugins code repository](https://github.com/langgenius/dify-plugins)! Before uploading, make sure your plugin follows the [plugin release guide](../publish-plugins/publish-to-dify-marketplace.md). Once approved, the code will be merged into the master branch and automatically go live in the [Dify Marketplace](https://marketplace.dify.ai/).
+
+#### Exploring More
+
+**Quick Start:**
+
+* [Develop Extension Type Plugin](extension-plugin.md)
+* [Develop Model Type Plugin](model-plugin/)
+* [Bundle Type Plugin: Package Multiple Plugins](bundle.md)
+
+**Plugin Specification Definition Documentation:**
+
+* [Manifest](../schema-definition/manifest.md)
+* [Endpoint](../schema-definition/endpoint.md)
+* [Reverse Invocation of the Dify Service](../schema-definition/reverse-invocation-of-the-dify-service/)
+* [Tools](../../guides/tools/)
+* [Models](../schema-definition/model/model-schema.md)
+* [Extend Agent Strategy](../schema-definition/agent.md)
diff --git a/en/plugins/quick-start/install-plugins.mdx b/en/plugins/quick-start/install-plugins.mdx
new file mode 100644
index 00000000..9acbf279
--- /dev/null
+++ b/en/plugins/quick-start/install-plugins.mdx
@@ -0,0 +1,94 @@
+---
+title: Install and Use Plugins
+description: 'Author: Allen'
+---
+
+
+## Installing Plugins
+
+To install plugins, click **"Plugins"** in the top-right corner of the Dify platform to access the plugin management page. You can install plugins via **Marketplace, GitHub, or Manual Upload**.
+
+![Install Plugins](https://assets-docs.dify.ai/2025/01/a56c40245090d9252557dcc6f4064a14.png)
+
+#### Marketplace
+
+Browse and select a plugin from the Marketplace. Click **"Install"** to add it to your current workspace.
+ +![Install via marketplace](https://assets-docs.dify.ai/2025/01/6ae8b661b7fa01b228a954d00ef552f3.png) + +#### GitHub + +Install plugins directly using GitHub repository links. This method requires plugins to meet code standards and include a `.difypkg` file attached to a Release. For more details, see [Publishing Plugins on GitHub](../publish-plugins/publish-plugin-on-personal-github-repo.md). + +![GitHub Installation](https://assets-docs.dify.ai/2025/01/4026a12a915e3fe9bd057d8827acfdce.png) + +#### Local Upload + +After [packaging your plugin](../publish-plugins/package-plugin-file-and-publish.md), upload the resulting `.difypkg` file. This option is ideal for offline or test environments and allows organizations to maintain internal plugins without exposing sensitive information. + +#### Authorizing Plugins + +Some plugins require API Keys or other authorization to function properly. After installation, enter the necessary credentials to enable the plugin. + +> API Keys are sensitive information and are only valid for the current user. Other team members must manually input their credentials to use the plugin. + +![Authorize Plugin](https://assets-docs.dify.ai/2024/11/972de4c9fa00f792a1ab734b080aafdc.png) + +*** + +## Using Plugins + +Once installed, plugins can be integrated into your Dify applications. Below are examples of how to use different types of plugins. + +### Model Plugins + +For example, after installing the `OpenAI` model plugin, click **Profile → Settings → Model Providers** in the top-right corner to configure your API Key and activate the provider. + +![Authorize OpenAI API Key](https://assets-docs.dify.ai/2025/01/3bf32d49975931e5924baa749aa7812f.png) + +Once authorized, the model can be used in all application types. + +![Using Model Plugin](https://assets-docs.dify.ai/2024/12/4a38b1ea534ca68515839c518c250d2f.png) + +### Tool Plugins + +Tool plugins can be used in **Chatflow**, **Workflow**, and **Agent** applications. 
Below is an example using the `Google` tool plugin. + +> Some tool plugins require API Key authorization before use. Configure these after installation for future convenience. + +#### Agent + +In an Agent application, locate the **"Tools"** section at the bottom of the application orchestration page. Select your installed tool plugins. + +When using the application, input instructions to utilize the tools. For example, entering "today's news" will invoke the Google plugin to retrieve online content. + +![Agent Tools](https://assets-docs.dify.ai/2024/12/78f833811cb0c3d5cbbb1a941cffc769.png) + +#### Chatflow / Workflow + +Chatflow and Workflow applications share the same orchestration canvas, so tool usage is identical. + +Click the **"+"** button at the end of a node, select the installed Google plugin, and connect it to the upstream nodes. + +![Chatflow / Workflow Tools](https://assets-docs.dify.ai/2024/12/7e7bcf1f9e3acf72c6917ea9de4e4613.png) + +In the plugin's input variables, enter the user query or other required information for online retrieval. + +![Tools input](https://assets-docs.dify.ai/2024/12/a67c4cffd8fdf33297d462b2e6d01d27.png) + +For usage methods of other plugin types, refer to their respective plugin detail pages. 
+ +![Using Plugins](https://assets-docs.dify.ai/2025/01/9d826302637638f705a94f73bd653958.png) + +*** + +## Read More + +To learn how to get started with plugin development, refer to the following guide: + + + develop-plugins + diff --git a/en/plugins/schema-definition/README.mdx b/en/plugins/schema-definition/README.mdx new file mode 100644 index 00000000..01dc9c08 --- /dev/null +++ b/en/plugins/schema-definition/README.mdx @@ -0,0 +1,28 @@ +--- +title: Schema Specification +--- + + + + manifest.md + + + + endpoint.md + + + + model + + + + general-specifications.md + + + + persistent-storage.md + + + + reverse-invocation-of-the-dify-service + diff --git a/en/plugins/schema-definition/agent.mdx b/en/plugins/schema-definition/agent.mdx new file mode 100644 index 00000000..61a6b990 --- /dev/null +++ b/en/plugins/schema-definition/agent.mdx @@ -0,0 +1,404 @@ +--- +title: Agent +--- + + +**Agent Strategy Overview** + +An Agent Strategy is an extensible template that defines standard input content and output formats. By developing specific Agent strategy interface functionality, you can implement various Agent strategies such as CoT (Chain of Thought) / ToT (Tree of Thought) / GoT (Graph of Thought) / BoT (Backbone of Thought), and achieve complex strategies like [Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/overview/). + +### **Adding Fields in Manifest** + +To add Agent strategies in a plugin, add the `plugins.agent_strategies` field in the manifest.yaml file and define the Agent provider. Example code: + +```yaml +version: 0.0.2 +type: plugin +author: "langgenius" +name: "agent" +plugins: + agent_strategies: + - "provider/agent.yaml" +``` + +Some unrelated fields in the manifest file are omitted. For detailed Manifest format, refer to [Manifest](manifest.md). 
+ +### **Defining the Agent Provider** + +Create an agent.yaml file with basic Agent provider information: + +```yaml +identity: + author: langgenius + name: agent + label: + en_US: Agent + zh_Hans: Agent + pt_BR: Agent + description: + en_US: Agent + zh_Hans: Agent + pt_BR: Agent + icon: icon.svg +strategies: + - strategies/function_calling.yaml +``` + +### **Defining and Implementing Agent Strategy** + +#### **Definition** + +Create a function\_calling.yaml file to define the Agent strategy code: + +```yaml +identity: + name: function_calling + author: Dify + label: + en_US: FunctionCalling + zh_Hans: FunctionCalling + pt_BR: FunctionCalling +description: + en_US: Function Calling is a basic strategy for agent, model will use the tools provided to perform the task. +parameters: + - name: model + type: model-selector + scope: tool-call&llm + required: true + label: + en_US: Model + - name: tools + type: array[tools] + required: true + label: + en_US: Tools list + - name: query + type: string + required: true + label: + en_US: Query + - name: max_iterations + type: number + required: false + default: 5 + label: + en_US: Max Iterations + max: 50 + min: 1 +extra: + python: + source: strategies/function_calling.py +``` + +The code format is similar to the [Tool](tool.md) standard format and defines four parameters: `model`, `tools`, `query`, and `max_iterations` to implement the most basic Agent strategy. This means that users can: + +* Select which model to use +* Choose which tools to utilize +* Configure the maximum number of iterations +* Input a query to start executing the Agent + +All these parameters work together to define how the Agent will process tasks and interact with the selected tools and models. + +#### Functional Implementation Coding + +**Retrieving Parameters** + +Based on the four parameters defined earlier, the model type parameter is model-selector, and the tool type parameter is a special array\[tools]. 
The retrieved parameters can be converted using the SDK's built-in AgentModelConfig and list\[ToolEntity].
+
+```python
+from collections.abc import Generator
+from typing import Any
+
+from pydantic import BaseModel
+
+from dify_plugin.entities.agent import AgentInvokeMessage
+from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity
+
+class FunctionCallingParams(BaseModel):
+    query: str
+    model: AgentModelConfig
+    tools: list[ToolEntity] | None
+    maximum_iterations: int = 3
+
+class FunctionCallingAgentStrategy(AgentStrategy):
+    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
+        """
+        Run FunctionCall agent application
+        """
+        fc_params = FunctionCallingParams(**parameters)
+```
+
+**Invoking the Model**
+
+Invoking a specific model is an essential capability of the Agent plugin. Use the `self.session.model.llm.invoke()` method from the SDK to call the model; its required input parameters are shown in the method signature below.
+
+Example Method Signature for Invoking the Model:
+
+```python
+def invoke(
+    self,
+    model_config: LLMModelConfig,
+    prompt_messages: list[PromptMessage],
+    tools: list[PromptMessageTool] | None = None,
+    stop: list[str] | None = None,
+    stream: bool = True,
+) -> Generator[LLMResultChunk, None, None] | LLMResult:
+```
+
+You need to pass the model information (model\_config), prompt information (prompt\_messages), and tool information (tools). The prompt\_messages parameter can be referenced using the example code below, while tool\_messages require certain transformations.
Refer to the following example code for invoking the model:

```python
from collections.abc import Generator
from typing import Any

from pydantic import BaseModel

from dify_plugin.entities.agent import AgentInvokeMessage
from dify_plugin.entities.model.llm import LLMModelConfig
from dify_plugin.entities.model.message import (
    PromptMessageTool,
    SystemPromptMessage,
    UserPromptMessage,
)
from dify_plugin.entities.tool import ToolParameter
from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity

class FunctionCallingParams(BaseModel):
    query: str
    instruction: str | None
    model: AgentModelConfig
    tools: list[ToolEntity] | None
    maximum_iterations: int = 3

class FunctionCallingAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        """
        Run FunctionCall agent application
        """
        # init params
        fc_params = FunctionCallingParams(**parameters)
        query = fc_params.query
        model = fc_params.model
        stop = fc_params.model.completion_params.get("stop", []) if fc_params.model.completion_params else []
        prompt_messages = [
            SystemPromptMessage(content="your system prompt message"),
            UserPromptMessage(content=query),
        ]
        tools = fc_params.tools
        prompt_messages_tools = self._init_prompt_tools(tools)

        # invoke llm
        chunks = self.session.model.llm.invoke(
            model_config=LLMModelConfig(**model.model_dump(mode="json")),
            prompt_messages=prompt_messages,
            stream=True,
            stop=stop,
            tools=prompt_messages_tools,
        )

    def _init_prompt_tools(self, tools: list[ToolEntity] | None) -> list[PromptMessageTool]:
        """
        Init tools
        """
        prompt_messages_tools = []
        for tool in tools or []:
            try:
                prompt_tool = self._convert_tool_to_prompt_message_tool(tool)
            except Exception:
                # api tool may be deleted
                continue

            # save prompt tool
            prompt_messages_tools.append(prompt_tool)

        return prompt_messages_tools

    def _convert_tool_to_prompt_message_tool(self, tool: ToolEntity) -> PromptMessageTool:
        """
        Convert a tool to a prompt message tool
        """
        message_tool = PromptMessageTool(
            name=tool.identity.name,
            description=tool.description.llm if tool.description else "",
            parameters={
                "type": "object",
                "properties": {},
                "required": [],
            },
        )

        for parameter in tool.parameters:
            if parameter.form != ToolParameter.ToolParameterForm.LLM:
                continue

            parameter_type = parameter.type
            if parameter.type in {
                ToolParameter.ToolParameterType.FILE,
                ToolParameter.ToolParameterType.FILES,
            }:
                continue

            enum = []
            if parameter.type == ToolParameter.ToolParameterType.SELECT:
                enum = [option.value for option in parameter.options] if parameter.options else []

            message_tool.parameters["properties"][parameter.name] = {
                "type": parameter_type,
                "description": parameter.llm_description or "",
            }

            if len(enum) > 0:
                message_tool.parameters["properties"][parameter.name]["enum"] = enum

            if parameter.required:
                message_tool.parameters["required"].append(parameter.name)

        return message_tool
```

**Invoking Tools**

Invoking tools is also a crucial capability of the Agent plugin. Use `self.session.tool.invoke()` to call a tool.

Example method signature for invoking a tool:

```python
def invoke(
    self,
    provider_type: ToolProviderType,
    provider: str,
    tool_name: str,
    parameters: dict[str, Any],
) -> Generator[ToolInvokeMessage, None, None]
```

Required parameters include provider\_type, provider, tool\_name, and parameters. Typically, tool\_name and parameters are generated by the LLM during Function Calling.
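Before a tool can be invoked, the tool name and arguments first have to be pulled out of the LLM's function-call output. The sketch below shows one way this might look; the `ToolCall` class is a simplified, hypothetical stand-in for the SDK's tool-call entities, and `extract_tool_call` is an illustrative helper, not part of the SDK:

```python
# Illustrative only: how tool_call_name / tool_call_args might be derived from
# the LLM's function-call output. ToolCall is a hypothetical stand-in for the
# SDK's tool-call entity; argument strings arrive as JSON and must be parsed.
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str        # tool name chosen by the LLM
    arguments: str   # JSON-encoded arguments produced by the LLM

def extract_tool_call(tool_calls: list[ToolCall]) -> tuple[str, dict]:
    """Return the first tool call's name and its parsed arguments."""
    call = tool_calls[0]
    return call.name, json.loads(call.arguments or "{}")

tool_call_name, tool_call_args = extract_tool_call(
    [ToolCall(name="get_weather", arguments='{"city": "Berlin"}')]
)
```

The resulting `tool_call_name` and `tool_call_args` are what the invoke-tool example below feeds into `self.session.tool.invoke()`.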
Example code for using invoke tool:

```python
from dify_plugin.entities.tool import ToolProviderType

class FunctionCallingAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        """
        Run FunctionCall agent application
        """
        fc_params = FunctionCallingParams(**parameters)

        # tool_call_name and tool_call_args are obtained from the output of the LLM
        tool_instances = {tool.identity.name: tool for tool in fc_params.tools} if fc_params.tools else {}
        tool_instance = tool_instances[tool_call_name]
        tool_invoke_responses = self.session.tool.invoke(
            provider_type=ToolProviderType.BUILT_IN,
            provider=tool_instance.identity.provider,
            tool_name=tool_instance.identity.name,
            # add the default values
            parameters={**tool_instance.runtime_parameters, **tool_call_args},
        )
```

The output of `self.session.tool.invoke()` is a Generator, which needs to be parsed as a stream.

Refer to the following function for parsing:

```python
import json
from collections.abc import Generator
from typing import cast

from dify_plugin.entities.tool import ToolInvokeMessage

def parse_invoke_response(tool_invoke_responses: Generator[ToolInvokeMessage]) -> str:
    result = ""
    for response in tool_invoke_responses:
        if response.type == ToolInvokeMessage.MessageType.TEXT:
            result += cast(ToolInvokeMessage.TextMessage, response.message).text
        elif response.type == ToolInvokeMessage.MessageType.LINK:
            result += (
                f"result link: {cast(ToolInvokeMessage.TextMessage, response.message).text}."
                + " please tell user to check it."
            )
        elif response.type in {
            ToolInvokeMessage.MessageType.IMAGE_LINK,
            ToolInvokeMessage.MessageType.IMAGE,
        }:
            result += (
                "image has been created and sent to user already, "
                + "you do not need to create it, just tell the user to check it now."
            )
        elif response.type == ToolInvokeMessage.MessageType.JSON:
            text = json.dumps(cast(ToolInvokeMessage.JsonMessage, response.message).json_object, ensure_ascii=False)
            result += f"tool response: {text}."
        else:
            result += f"tool response: {response.message!r}."
    return result
```

**Log**

To surface the Agent's thinking process, besides returning ordinary messages, you can use dedicated interfaces to display the entire thought process as a tree-structured log.

**Creating Logs**

* This interface creates and returns an `AgentLogMessage`, which represents one node in the log tree.
* If `parent` is passed in, the node is attached under that parent node.
* The default status is `SUCCESS`. However, to better show the task's progress, you can first create the log with status `START` to display an "in progress" entry, then update its status to `SUCCESS` once the task completes. This way, users can clearly see the entire process from start to finish.
* The `label` is used as the log title shown to users.

```python
def create_log_message(
    self,
    label: str,
    data: Mapping[str, Any],
    status: AgentInvokeMessage.LogMessage.LogStatus = AgentInvokeMessage.LogMessage.LogStatus.SUCCESS,
    parent: AgentInvokeMessage | None = None,
) -> AgentInvokeMessage
```

**Completing Logs**

If you previously set `START` as the initial status, you can use the log completion interface to change the status.

```python
def finish_log_message(
    self,
    log: AgentInvokeMessage,
    status: AgentInvokeMessage.LogMessage.LogStatus = AgentInvokeMessage.LogMessage.LogStatus.SUCCESS,
    error: Optional[str] = None,
) -> AgentInvokeMessage
```

**Example Implementation**

This example demonstrates a simple two-step execution process: first outputting a "Thinking" status log, then completing the actual task processing.
```python
class FunctionCallingAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        thinking_log = self.create_log_message(
            data={"Query": parameters.get("query")},
            label="Thinking",
            status=AgentInvokeMessage.LogMessage.LogStatus.START,
        )

        yield thinking_log

        llm_response = self.session.model.llm.invoke(
            model_config=LLMModelConfig(
                provider="openai",
                model="gpt-4o-mini",
                mode="chat",
                completion_params={},
            ),
            prompt_messages=[
                SystemPromptMessage(content="you are a helpful assistant"),
                UserPromptMessage(content=parameters.get("query")),
            ],
            stream=False,
            tools=[],
        )

        thinking_log = self.finish_log_message(log=thinking_log)
        yield thinking_log
        yield self.create_text_message(text=llm_response.message.content)
```

diff --git a/en/plugins/schema-definition/endpoint.mdx b/en/plugins/schema-definition/endpoint.mdx
new file mode 100644
index 00000000..929e5ca7
--- /dev/null
+++ b/en/plugins/schema-definition/endpoint.mdx
@@ -0,0 +1,88 @@
---
title: Endpoint
---

In this article, we will use the [Quick Start: Rainbow Cat project](../develop-plugins/extension-plugin.md) as an example to illustrate the structure of Endpoints within a plugin. For the complete plugin code, please refer to the [GitHub repository](https://github.com/langgenius/dify-plugin-sdks/tree/main/python/examples/neko).

### **Group Definition**

An `Endpoint` group is a collection of multiple `Endpoints`. When creating a new `Endpoint` in a `Dify` plugin, you may need to fill in the following configuration.

![](https://assets-docs.dify.ai/2024/11/763dbf86e4319591415dc5a1b6948ccb.png)

Besides the `Endpoint Name`, you can add new form items by writing group configuration information. After saving, you'll see multiple interfaces that share the same configuration.
![](https://assets-docs.dify.ai/2024/11/b778b7093b7df0dc80a476c65ddcbe58.png)

#### **Structure**

* `settings` (map\[string] [ProviderConfig](general-specifications.md#providerconfig)): Endpoint configuration definitions
* `endpoints` (list\[string], required): Points to specific `endpoint` interface definitions

```yaml
settings:
  api_key:
    type: secret-input
    required: true
    label:
      en_US: API key
      zh_Hans: API key
      ja_Jp: API key
      pt_BR: API key
    placeholder:
      en_US: Please input your API key
      zh_Hans: 请输入你的 API key
      ja_Jp: あなたの API key を入れてください
      pt_BR: Por favor, insira sua chave API
endpoints:
  - endpoints/duck.yaml
  - endpoints/neko.yaml
```

### **Interface Definition**

* `path` (string): Follows the werkzeug interface standard
* `method` (string): Interface method; only supports `HEAD`, `GET`, `POST`, `PUT`, `DELETE`, `OPTIONS`
* `extra` (object): Configuration information beyond the basic info
  * `python` (object)
    * `source` (string): Source code implementing this interface

```yaml
path: "/duck/"
method: "GET"
extra:
  python:
    source: "endpoints/duck.py"
```

### **Endpoint Implementation**

You must implement a subclass of `dify_plugin.Endpoint` and implement its `_invoke` method.

* **Input Parameters**
  * `r` (Request): Request object from werkzeug
  * `values` (Mapping): Path parameters parsed from the path
  * `settings` (Mapping): Configuration information for this Endpoint
* **Return**
  * Response object from werkzeug; supports streaming returns
  * Directly returning a string is not supported

**Example Code:**

```python
from typing import Mapping
from werkzeug import Request, Response
from dify_plugin import Endpoint

class Duck(Endpoint):
    def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response:
        """
        Invokes the endpoint with the given request.
        """
        app_id = values["app_id"]

        def generator():
            yield f"{app_id}"

        return Response(generator(), status=200, content_type="text/html")
```

diff --git a/en/plugins/schema-definition/general-specifications.mdx b/en/plugins/schema-definition/general-specifications.mdx
new file mode 100644
index 00000000..9b6605e8
--- /dev/null
+++ b/en/plugins/schema-definition/general-specifications.mdx
@@ -0,0 +1,105 @@
---
title: General Specifications
---

This article briefly introduces common structures in plugin development.

### **Path Specifications**

When specifying file paths in the Manifest or any YAML files, follow these two rules depending on the file type:

* For multimedia files such as images or videos (e.g., the plugin `icon`), place them in the `_assets` folder under the plugin root directory.
* For regular text files such as `.py` or `.yaml`, use the absolute path within the plugin project.

### **Common Structures**

When defining plugins, some data structures can be shared among tools, models, and Endpoints. These shared structures are defined here.

#### **I18nObject**

`I18nObject` is an internationalization structure compliant with the IETF BCP 47 standard, currently supporting four languages:

* en\_US
* zh\_Hans
* ja\_Jp
* pt\_BR

#### **ProviderConfig**

`ProviderConfig` is a common provider form structure, applicable to both `Tool` and `Endpoint`.

* `name` (string): Form item name
* `label` (I18nObject, required): Follows IETF BCP 47
* `type` (provider\_config\_type, required): Form type
* `scope` (provider\_config\_scope): Option range, varies with `type`
* `required` (bool): Cannot be empty
* `default` (any): Default value; only supports the basic types `float`, `int`, `string`
* `options` (list\[provider\_config\_option]): Options, only used when type is `select`
* `helper` (object): Help documentation link label, follows IETF BCP 47
* `url` (string): Help documentation link
* `placeholder` (object): Follows IETF BCP 47

#### **ProviderConfigOption** (object)

* `value` (string, required): Option value
* `label` (object, required): Complies with [IETF BCP 47](https://tools.ietf.org/html/bcp47)

#### **ProviderConfigType** (string)

* `secret-input` (string): Configuration information that will be stored encrypted
* `text-input` (string): Plain text
* `select` (string): Drop-down box
* `boolean` (bool): Switch
* `model-selector` (object): Model configuration information, including provider name, model name, model parameters, etc.
* `app-selector` (object): App ID
* `tool-selector` (object): Tool configuration information, including tool provider, name, parameters, etc.
* `dataset-selector` (string): TBD

#### **ProviderConfigScope** (string)

* When `type` is `model-selector`
  * `all`
  * `llm`
  * `text-embedding`
  * `rerank`
  * `tts`
  * `speech2text`
  * `moderation`
  * `vision`
* When `type` is `app-selector`
  * `all`
  * `chat`
  * `workflow`
  * `completion`
* When `type` is `tool-selector`
  * `all`
  * `plugin`
  * `api`
  * `workflow`

#### **ModelConfig**

* `provider` (string): Model provider name including plugin\_id, in the format `langgenius/openai/openai`
* `model` (string): Specific model name
* `model_type` (enum): Model type enumeration; refer to this document

#### **NodeResponse**

* `inputs` (dict): Variables finally input to the node
* `outputs` (dict): Node output results
* `process_data` (dict): Data generated during node execution

#### **ToolSelector**

* `provider_id` (string): Tool provider name
* `tool_name` (string): Tool name
* `tool_description` (string): Tool description
* `tool_configuration` (dict\[str, Any]): Tool configuration information
* `tool_parameters` (dict\[str, dict]): Parameters requiring LLM inference
  * `name` (string): Parameter name
  * `type` (string): Parameter type
  * `required` (bool): Whether required
  * `description` (string): Parameter description
  * `default` (any): Default value
  * `options` (list\[string]): Available options

diff --git a/en/plugins/schema-definition/manifest.mdx b/en/plugins/schema-definition/manifest.mdx
new file mode 100644
index 00000000..afcaa7c3
--- /dev/null
+++ b/en/plugins/schema-definition/manifest.mdx
@@ -0,0 +1,101 @@
---
title: Manifest
---

A Manifest is a YAML-compliant file that defines the most basic information about a **plugin**, including but not limited to the plugin name, author, included tools, models, and other information.

If this file's format is incorrect, both the plugin parsing and packaging processes will fail.

### **Code Example**

Below is a simple example of a Manifest file. The meaning and function of each data element are explained below. For other plugin code to reference, please check the [GitHub repository](https://github.com/langgenius/dify-plugin-sdks/tree/main/python/examples).

```yaml
version: 0.0.1
type: "plugin"
author: "Yeuoly"
name: "neko"
label:
  en_US: "Neko"
created_at: "2024-07-12T08:03:44.658609186Z"
icon: "icon.svg"
resource:
  memory: 1048576
  permission:
    tool:
      enabled: true
    model:
      enabled: true
      llm: true
    endpoint:
      enabled: true
    app:
      enabled: true
    storage:
      enabled: true
      size: 1048576
plugins:
  endpoints:
    - "provider/neko.yaml"
meta:
  version: 0.0.1
  arch:
    - "amd64"
    - "arm64"
  runner:
    language: "python"
    version: "3.10"
    entrypoint: "main"
privacy: "./privacy.md"
```

### **Structure**

* `version` (version, required): Plugin version
* `type` (type, required): Plugin type; currently only supports `plugin`, will support `bundle` in the future
* `author` (string, required): Author, defined as the organization name in the Marketplace
* `label` (label, required): Multi-language names
* `created_at` (RFC3339, required): Creation time; must not be later than the current time for the Marketplace
* `icon` (asset, required): Icon path
* `resource` (object): Required resources
  * `memory` (int64): Maximum memory usage, mainly related to AWS Lambda resource requests on SaaS, in bytes
  * `permission` (object): Permission requests
    * `tool` (object): Permission for reverse tool calls
      * `enabled` (bool)
    * `model` (object): Permission for reverse model calls
      * `enabled` (bool)
      * `llm` (bool)
      * `text_embedding` (bool)
      * `rerank` (bool)
      * `tts` (bool)
      * `speech2text` (bool)
      * `moderation` (bool)
    * `node` (object): Permission for reverse node calls
      * `enabled` (bool)
    * `endpoint` (object): Permission to register endpoints
      * `enabled` (bool)
    * `app` (object): Permission for reverse app calls
      * `enabled` (bool)
    * `storage` (object): Permission for persistent storage
      * `enabled` (bool)
      * `size` (int64): Maximum allowed persistent memory size in bytes
* `plugins` (object, required): List of YAML files defining the plugin's specific capabilities, as absolute paths within the plugin package
  * Format
    * `tools` (list\[string]): Extended [tool](tool.md) providers
    * `models` (list\[string]): Extended [model](model/) providers
    * `endpoints` (list\[string]): Extended [Endpoint](endpoint.md) providers
    * `agent_strategies` (list\[string]): Extended Agent strategy providers
  * Limitations
    * Cannot extend both tools and models simultaneously
    * Must have at least one extension
    * Cannot extend both models and Endpoints simultaneously
    * Currently supports only one provider per extension type
* `meta` (object)
  * `version` (version, required): Manifest format version, initial version `0.0.1`
  * `arch` (list\[string], required): Supported architectures, currently only `amd64` and `arm64`
  * `runner` (object, required): Runtime configuration
    * `language` (string): Currently only supports python
    * `version` (string): Language version, currently only supports `3.12`
    * `entrypoint` (string): Program entry point, should be `main` for Python
* `privacy` (string, optional): An optional field specifying the relative path or URL to the plugin's privacy policy file, such as `"./privacy.md"` or `"https://your-web/privacy"`. If you plan to publish the plugin to the Dify Marketplace, this field is required to provide a clear statement on user data usage and privacy. For more detailed instructions, please refer to the [Plugin Privacy Policy Guidelines](../publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md#id-1.-list-the-types-of-data-collected).

diff --git a/en/plugins/schema-definition/model/README.mdx b/en/plugins/schema-definition/model/README.mdx
new file mode 100644
index 00000000..ebeb8f4b
--- /dev/null
+++ b/en/plugins/schema-definition/model/README.mdx
@@ -0,0 +1,5 @@
---
title: Model
---

diff --git a/en/plugins/schema-definition/model/model-designing-rules.mdx b/en/plugins/schema-definition/model/model-designing-rules.mdx
new file mode 100644
index 00000000..dac1cd4f
--- /dev/null
+++ b/en/plugins/schema-definition/model/model-designing-rules.mdx
@@ -0,0 +1,184 @@
---
title: Model Designing Rules
---

* Model provider rules are based on the [Provider](model-designing-rules.md#provider) entity.
* Model rules are based on the [AIModelEntity](model-designing-rules.md#aimodelentity) entity.

> All entities below are based on Pydantic BaseModel and can be found in the entities module.
### **Provider**

* `provider` (string): Provider identifier, e.g., openai
* `label` (object): Provider display name, i18n; supports en\_US (English) and zh\_Hans (Chinese)
  * `zh_Hans` (string) \[optional]: Chinese label, defaults to en\_US if not set
  * `en_US` (string): English label
* `description` (object) \[optional]: Provider description, i18n
  * `zh_Hans` (string) \[optional]: Chinese description
  * `en_US` (string): English description
* `icon_small` (string) \[optional]: Provider small icon, stored in the \_assets directory
  * `zh_Hans` (string) \[optional]: Chinese icon
  * `en_US` (string): English icon
* `icon_large` (string) \[optional]: Provider large icon, stored in the \_assets directory
  * `zh_Hans` (string) \[optional]: Chinese icon
  * `en_US` (string): English icon
* `background` (string) \[optional]: Background color value, e.g., #FFFFFF; uses the frontend default if empty
* `help` (object) \[optional]: Help information
  * `title` (object): Help title, i18n
    * `zh_Hans` (string) \[optional]: Chinese title
    * `en_US` (string): English title
  * `url` (object): Help link, i18n
    * `zh_Hans` (string) \[optional]: Chinese link
    * `en_US` (string): English link
* `supported_model_types` (array\[ModelType]): Supported model types
* `configurate_methods` (array\[ConfigurateMethod]): Configuration methods
* `provider_credential_schema` (ProviderCredentialSchema): Provider credential specification
* `model_credential_schema` (ModelCredentialSchema): Model credential specification

### **AIModelEntity**

* `model` (string): Model identifier, e.g., gpt-3.5-turbo
* `label` (object) \[optional]: Model display name, i18n
  * `zh_Hans` (string) \[optional]: Chinese label
  * `en_US` (string): English label
* `model_type` (ModelType): Model type
* `features` (array\[ModelFeature]) \[optional]: List of supported features
* `model_properties` (object): Model properties
  * `mode` (LLMMode): Mode (available for the llm model type)
  * `context_size` (int): Context size (available for the llm and text-embedding types)
  * `max_chunks` (int): Maximum number of chunks (available for the text-embedding and moderation types)
  * `file_upload_limit` (int): Maximum file upload limit in MB (available for the speech2text type)
  * `supported_file_extensions` (string): Supported file extensions, e.g., mp3,mp4 (available for the speech2text type)
  * `default_voice` (string): Default voice, must be one of: alloy, echo, fable, onyx, nova, shimmer (available for the tts type)
  * `voices` (list): Available voice list
    * `mode` (string): Voice model (available for the tts type)
    * `name` (string): Voice model display name (available for the tts type)
    * `language` (string): Languages supported by the voice model (available for the tts type)
  * `word_limit` (int): Word limit per conversion, defaults to paragraph division (available for the tts type)
  * `audio_type` (string): Supported audio file extensions, e.g., mp3,wav (available for the tts type)
  * `max_workers` (int): Maximum concurrent tasks for text-to-audio conversion (available for the tts type)
  * `max_characters_per_chunk` (int): Maximum characters per chunk (available for the moderation type)
* `parameter_rules` (array\[ParameterRule]) \[optional]: Model call parameter rules
* `pricing` (PriceConfig) \[optional]: Pricing information
* `deprecated` (bool): Whether deprecated. If true, the model won't show in the model list, but already-configured models can still be used. Default: False

### **ModelType**

* `llm`: Text generation model
* `text-embedding`: Text embedding model
* `rerank`: Rerank model
* `speech2text`: Speech to text
* `tts`: Text to speech
* `moderation`: Moderation

### **ConfigurateMethod**

* `predefined-model`: Predefined models. Users only need to configure the unified provider credentials to use the predefined models under the provider.
* `customizable-model`: Custom models. Users need to add a credential configuration for each model.
* `fetch-from-remote`: Fetch from remote. Like the `predefined-model` configuration, only unified provider credentials are required; models are fetched from the provider using the credential information.

### **ModelFeature**

* `agent-thought`: Agent reasoning; generally, models over 70B have chain-of-thought capability
* `vision`: Visual capability, i.e., image understanding
* `tool-call`: Tool calling
* `multi-tool-call`: Multiple tool calling
* `stream-tool-call`: Streaming tool calling

### **FetchFrom**

* `predefined-model`: Predefined models
* `fetch-from-remote`: Remote models

### **LLMMode**

* `completion`: Text completion
* `chat`: Conversation

### **ParameterRule**

* `name` (string): Actual parameter name for model calls
* `use_template` (string) \[optional]: Template to use. There are five preset parameter configuration templates:
  * temperature
  * top\_p
  * frequency\_penalty
  * presence\_penalty
  * max\_tokens

  Setting a template variable name in use\_template applies the default configuration from entities.defaults.PARAMETER\_RULE\_TEMPLATE
* `label` (object) \[optional]: Labels, i18n
  * `zh_Hans` (string) \[optional]: Chinese label
  * `en_US` (string): English label
* `type` (string) \[optional]: Parameter type
  * `int`: Integer
  * `float`: Float
  * `string`: String
  * `boolean`: Boolean
* `help` (object) \[optional]: Help information
  * `zh_Hans` (string) \[optional]: Chinese help info
  * `en_US` (string): English help info
* `required` (bool): Whether required, default False
* `default` (int/float/string/bool) \[optional]: Default value
* `min` (int/float) \[optional]: Minimum value, only for numeric types
* `max` (int/float) \[optional]: Maximum value, only for numeric types
* `precision` (int) \[optional]: Precision, number of decimal places, only for numeric types
* `options` (array\[string]) \[optional]: Dropdown options, only for string type

### **PriceConfig**

* `input` (float): Input price, i.e., prompt price
* `output` (float): Output price, i.e., returned content price
* `unit` (float): Price unit, e.g., if the price is measured per 1M tokens, the unit is 0.000001
* `currency` (string): Currency unit

### **ProviderCredentialSchema**

* `credential_form_schemas` (array\[CredentialFormSchema]): Credential form specifications

### **ModelCredentialSchema**

* `model` (object): Model identifier, default variable name is `model`
* `label` (object): Display name of the model form item
  * `en_US` (string): English
  * `zh_Hans` (string) \[optional]: Chinese
* `placeholder` (object): Model prompt content
  * `en_US` (string): English
  * `zh_Hans` (string) \[optional]: Chinese
* `credential_form_schemas` (array\[CredentialFormSchema]): Credential form specifications

### **CredentialFormSchema**

* `variable` (string): Form item variable name
* `label` (object): Form item label
  * `en_US` (string): English
  * `zh_Hans` (string) \[optional]: Chinese
* `type` (FormType): Form item type
* `required` (bool): Whether required
* `default` (string): Default value
* `options` (array\[FormOption]): Form item options for the select or radio types
* `placeholder` (object): Form item placeholder for the text-input type
  * `en_US` (string): English
  * `zh_Hans` (string) \[optional]: Chinese
* `max_length` (int): Maximum input length for the text-input type; 0 means no limit
* `show_on` (array\[FormShowOnObject]): Shown when other form items meet the conditions; always shown if empty

#### **FormType**

* `text-input`: Text input component
* `secret-input`: Password input component
* `select`: Single-select dropdown
* `radio`: Radio component
* `switch`: Switch component, only supports true and false

#### **FormOption**

* `label` (object): Label
  * `en_US` (string): English
  * `zh_Hans` (string) \[optional]: Chinese
* `value` (string): Dropdown option value
* `show_on` (array\[FormShowOnObject]): Shown when other form items meet the conditions; always shown if empty

#### **FormShowOnObject**

* `variable` (string): Variable name of the other form item
* `value` (string): Variable value of the other form item

diff --git a/en/plugins/schema-definition/model/model-schema.mdx b/en/plugins/schema-definition/model/model-schema.mdx
new file mode 100644
index 00000000..3a86eec4
--- /dev/null
+++ b/en/plugins/schema-definition/model/model-schema.mdx
@@ -0,0 +1,355 @@
---
title: Model Schema
---

This section describes the interface methods and parameters that the provider and each model type need to implement.

### Model Providers

Inherit from the `__base.model_provider.ModelProvider` base class and implement the following interface:

#### Provider Credentials Validation

```python
def validate_provider_credentials(self, credentials: dict) -> None:
    """
    Validate provider credentials
    You can choose any validate_credentials method of model type or implement validate method by yourself,
    such as: get model list api

    if validate failed, raise exception

    :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
    """
```

`credentials` (object): Credential information. The credential parameters are defined by `provider_credential_schema` in the provider's YAML configuration file, e.g., `api_key` is passed in. If validation fails, throw the `errors.validate.CredentialsValidateFailedError` error.

Note: Predefined models must fully implement this interface, while custom model providers can implement it as simply as:

```python
class XinferenceProvider(Provider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        pass
```

### Models

Models are divided into 5 different model types, each inheriting from a different base class and requiring the implementation of different methods.
#### Common Interfaces

All models must uniformly implement the following 2 methods:

**Model Credential Validation**

Similar to provider credential validation, this validates the credentials of an individual model.

```python
def validate_credentials(self, model: str, credentials: dict) -> None:
    """
    Validate model credentials

    :param model: model name
    :param credentials: model credentials
    :return:
    """
```

Parameters:

* `model` (string): Model name
* `credentials` (object): Credential information. The credential parameters are defined by `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, e.g., `api_key` is passed in. If validation fails, throw the `errors.validate.CredentialsValidateFailedError` error.

**Invocation Exception Error Mapping**

When a model invocation raises an exception, it needs to be mapped to the Runtime-specified InvokeError type so that Dify can handle different errors accordingly.

Runtime Errors:

* `InvokeConnectionError`: Invocation connection error
* `InvokeServerUnavailableError`: Invocation service unavailable
* `InvokeRateLimitError`: Invocation rate limit reached
* `InvokeAuthorizationError`: Invocation authentication failed
* `InvokeBadRequestError`: Incorrect invocation parameters

```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    """
    Map model invoke error to unified error
    The key is the error type thrown to the caller
    The value is the error type thrown by the model,
    which needs to be converted into a unified error type for the caller.

    :return: Invoke error mapping
    """
```

You can also define and raise the corresponding errors directly, so that subsequent calls can raise exceptions such as `InvokeConnectionError` directly.
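A minimal sketch of how such a mapping might be built and applied. The `InvokeError` hierarchy here is a local stand-in for the classes exported by the SDK, and the vendor exception types on the right-hand side are placeholders for whatever the provider's client library actually raises:

```python
# Stand-in error hierarchy; a real plugin would import these from the SDK.
class InvokeError(Exception): ...
class InvokeConnectionError(InvokeError): ...
class InvokeRateLimitError(InvokeError): ...
class InvokeAuthorizationError(InvokeError): ...

def invoke_error_mapping() -> dict[type[InvokeError], list[type[Exception]]]:
    # Key: unified error raised to the caller.
    # Value: vendor exception types it covers (placeholders here).
    return {
        InvokeConnectionError: [ConnectionError, TimeoutError],
        InvokeRateLimitError: [],  # e.g. the vendor SDK's RateLimitError
        InvokeAuthorizationError: [PermissionError],
    }

def to_unified_error(exc: Exception) -> InvokeError:
    """Convert a vendor exception into the unified error type."""
    for unified, vendor_types in invoke_error_mapping().items():
        if any(isinstance(exc, t) for t in vendor_types):
            return unified(str(exc))
    return InvokeError(str(exc))  # fallback for unmapped exceptions
```

The Runtime performs essentially this lookup with the dictionary returned by `_invoke_error_mapping`, which is why each vendor exception should appear under exactly one unified key.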
+ +### Large Language Model (LLM) + +Inherit from `__base.large_language_model.LargeLanguageModel` base class, implement the following interfaces: + +#### LLM Invocation + +Implement the core method for LLM invocation, supporting both streaming and synchronous returns. + +```python +def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ +``` + +Parameters: + +* `model` (string): Model name +* `credentials` (object): Credential information Credential parameters are defined by the provider's YAML configuration file's `provider_credential_schema` or `model_credential_schema`, such as passing in `api_key` +* `prompt_messages` (array\[PromptMessage]): Prompt list + * For Completion-type models, only one UserPromptMessage element needs to be passed + * For Chat-type models, a list of SystemPromptMessage, UserPromptMessage, AssistantPromptMessage, ToolPromptMessage elements needs to be passed according to message type +* `model_parameters` (object): Model parameters defined by the model's YAML configuration's `parameter_rules` +* `tools` (array\[PromptMessageTool]) \[optional]: Tool list, equivalent to function calling functions +* `stop` (array\[string]) \[optional]: Stop sequences. Model output will stop before the defined string +* `stream` (bool): Whether to stream output, default True. 
Streaming returns Generator\[LLMResultChunk], non-streaming returns LLMResult
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* Streaming returns Generator\[LLMResultChunk]
+* Non-streaming returns LLMResult
+
+#### Pre-calculate Input Tokens
+
+If the model does not provide a token-counting interface, simply return 0.
+
+```python
+def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
+                   tools: Optional[list[PromptMessageTool]] = None) -> int:
+    """
+    Get number of tokens for given prompt messages
+
+    :param model: model name
+    :param credentials: model credentials
+    :param prompt_messages: prompt messages
+    :param tools: tools for tool calling
+    :return:
+    """
+```
+
+#### Optional: Get Custom Model Rules
+
+```python
+def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
+    """
+    Get customizable model schema
+
+    :param model: model name
+    :param credentials: model credentials
+    :return: model schema
+    """
+```
+
+When the provider supports adding custom LLMs, implement this method so that custom models can obtain their model rules; it returns None by default.
+
+For most fine-tuned models under the OpenAI provider, you can derive the base model (such as gpt-3.5-turbo-1106) from the fine-tuned model name and then return the predefined parameter rules of that base model; refer to the [OpenAI](https://github.com/langgenius/dify-official-plugins/tree/main/models/openai) implementation.
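
A minimal sketch of deriving the base model from an OpenAI-style fine-tuned model name; the `ft:` naming convention shown is an assumption for illustration, so check the provider's actual naming scheme:

```python
def base_model_of(model: str) -> str:
    """Return the base model for an OpenAI-style fine-tuned model name.

    Fine-tuned models are assumed to be named like
    'ft:gpt-3.5-turbo-1106:my-org::abc123'; anything else is
    treated as already being a base model.
    """
    if model.startswith("ft:"):
        return model.split(":")[1]
    return model
```

The returned base model name can then be used to look up the predefined parameter rules.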
+
+### Text Embedding
+
+Inherit from the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces:
+
+#### Embedding Invocation
+
+```python
+def _invoke(self, model: str, credentials: dict,
+            texts: list[str], user: Optional[str] = None) \
+        -> TextEmbeddingResult:
+    """
+    Invoke text embedding model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param texts: texts to embed
+    :param user: unique user id
+    :return: embeddings result
+    """
+```
+
+Parameters:
+
+* `model` (string): Model name
+* `credentials` (object): Credential information. The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file
+* `texts` (array\[string]): Text list, can be processed in batch
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* TextEmbeddingResult entity
+
+#### Pre-calculate Tokens
+
+```python
+def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
+    """
+    Get number of tokens for given texts
+
+    :param model: model name
+    :param credentials: model credentials
+    :param texts: texts to embed
+    :return:
+    """
+```
+
+As with LargeLanguageModel, this interface needs to select an appropriate tokenizer based on the model. If the model does not provide a tokenizer, the `_get_num_tokens_by_gpt2(text: str)` method in the AIModel base class can be used.
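
A hedged sketch of what the batch token count might look like; the whitespace split below is only a self-contained stand-in for a real tokenizer call such as `self._get_num_tokens_by_gpt2(text)`:

```python
def get_num_tokens(texts: list[str]) -> int:
    """Sum per-text token counts for a batch of embedding inputs.

    A real implementation would call an actual tokenizer (for example
    self._get_num_tokens_by_gpt2(text)); the whitespace split below is
    only a stand-in so this sketch is runnable on its own.
    """
    def count_tokens(text: str) -> int:
        return len(text.split())  # stand-in tokenizer
    return sum(count_tokens(text) for text in texts)
```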
+
+### Rerank
+
+Inherit from the `__base.rerank_model.RerankModel` base class and implement the following interfaces:
+
+#### Rerank Invocation
+
+```python
+def _invoke(self, model: str, credentials: dict,
+            query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None,
+            user: Optional[str] = None) \
+        -> RerankResult:
+    """
+    Invoke rerank model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param query: search query
+    :param docs: docs for reranking
+    :param score_threshold: score threshold
+    :param top_n: top n
+    :param user: unique user id
+    :return: rerank result
+    """
+```
+
+Parameters:
+
+* `model` (string): Model name
+* `credentials` (object): Credential information
+* `query` (string): Search query content
+* `docs` (array\[string]): List of segments to be re-ranked
+* `score_threshold` (float) \[optional]: Score threshold
+* `top_n` (int) \[optional]: Take the top n segments
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* RerankResult entity
+
+### Speech2Text
+
+Inherit from the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces:
+
+#### Invoke
+
+```python
+def _invoke(self, model: str, credentials: dict,
+            file: IO[bytes], user: Optional[str] = None) \
+        -> str:
+    """
+    Invoke speech-to-text model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param file: audio file
+    :param user: unique user id
+    :return: text for given audio file
+    """
+```
+
+Parameters:
+
+* `model` (string): Model name
+* `credentials` (object): Credential information
+* `file` (File): File stream
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* Text string converted from the speech
+
+### Text2Speech
+
+Inherit from the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces:
+
+#### Invoke
+
+```python
+def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
+    """
+    Invoke text-to-speech model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param content_text: text content to be converted
+    :param streaming: output is streaming
+    :param user: unique user id
+    :return: converted audio
+    """
+```
+
+Parameters:
+
+* `model` (string): Model name
+* `credentials` (object): Credential information
+* `content_text` (string): Text content to be converted
+* `streaming` (bool): Whether to stream output
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* Audio stream converted from the text
+
+### Moderation
+
+Inherit from the `__base.moderation_model.ModerationModel` base class and implement the following interfaces:
+
+#### Invoke
+
+```python
+def _invoke(self, model: str, credentials: dict,
+            text: str, user: Optional[str] = None) \
+        -> bool:
+    """
+    Invoke moderation model
+
+    :param model: model name
+    :param credentials: model credentials
+    :param text: text to moderate
+    :param user: unique user id
+    :return: false if text is safe, true otherwise
+    """
+```
+
+Parameters:
+
+* `model` (string): Model name
+* `credentials` (object): Credential information
+* `text` (string): Text content
+* `user` (string) \[optional]: Unique user identifier to help providers monitor and detect abuse
+
+Return:
+
+* False indicates the input text is safe, True indicates otherwise
 diff --git a/en/plugins/schema-definition/persistent-storage.mdx b/en/plugins/schema-definition/persistent-storage.mdx new file mode 100644 index 00000000..982dc4c5 --- /dev/null +++ b/en/plugins/schema-definition/persistent-storage.mdx @@ -0,0 +1,57 @@
+---
+title: Persistent Storage
+---
+
+
+Looking at a plugin's Tool and Endpoint alone, it is easy to see that in most cases they can only complete a single
round of interaction: a request comes in, data is returned, and the task ends.
+
+If data needs to be stored long-term, for example to implement persistent memory, the plugin needs persistent storage capabilities. **The persistent storage mechanism allows plugins to store data persistently within the same Workspace.** Currently this is provided through a KV database; more flexible and powerful storage interfaces may be introduced in the future based on actual usage.
+
+### Set Key
+
+#### **Entry**
+
+```python
+self.session.storage
+```
+
+#### **Endpoint**
+
+```python
+def set(self, key: str, val: bytes) -> None:
+    pass
+```
+
+Note that the value is passed as `bytes`, so you can also store files in it.
+
+### Get Key
+
+#### **Entry**
+
+```python
+self.session.storage
+```
+
+#### **Endpoint**
+
+```python
+def get(self, key: str) -> bytes:
+    pass
+```
+
+### Delete Key
+
+#### **Entry**
+
+```python
+self.session.storage
+```
+
+#### **Endpoint**
+
+```python
+def delete(self, key: str) -> None:
+    pass
+```
 diff --git a/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.mdx b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.mdx new file mode 100644 index 00000000..12774349 --- /dev/null +++ b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/README.mdx @@ -0,0 +1,22 @@
+---
+title: Reverse Invocation of the Dify Service
+---
+
+
+Plugins can request some of the services within the main Dify platform through reverse invocation to enhance their capabilities.
+
+### Callable Dify Modules
+
+* [**App**](app.md)
+
+  Plugins can access data from Apps within the Dify platform.
+* [**Model**](model.md)
+
+  Plugins can make reverse calls to LLM capabilities within the Dify platform, including all model types and features available on the platform, such as TTS, Rerank, etc.
+* [**Tool**](tool.md)
+
+  Plugins can request other tool-type plugins within the Dify platform.
+* [**Node**](node.md)
+
+  Plugins can request nodes within specific Chatflow/Workflow applications on the Dify platform.
+
 diff --git a/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.mdx b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.mdx new file mode 100644 index 00000000..22daad9e --- /dev/null +++ b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/app.mdx @@ -0,0 +1,105 @@
+---
+title: App
+---
+
+
+Reverse App requesting means that plugins can access App data in Dify. This module supports both streaming and non-streaming App calls.
+
+### **Interface Types**
+
+* `Chatbot/Agent/Chatflow` applications are all chat-type applications with the same input and output parameters, so they can be uniformly treated as the **Chat Interface**.
+* Workflow applications use a separate **Workflow Interface**.
+* Completion (text generation) applications use a separate **Completion Interface**.
+
+Note: Plugins can only access Apps within the same Workspace as the plugin.
+
+### **Requesting the Chat Interface**
+
+#### Entry
+
+```python
+self.session.app.chat
+```
+
+#### **Endpoint Specification**
+
+```python
+def invoke(
+    self,
+    app_id: str,
+    inputs: dict,
+    response_mode: Literal["streaming", "blocking"],
+    conversation_id: str,
+    files: list,
+) -> Generator[dict, None, None] | dict:
+    pass
+```
+
+When `response_mode` is `streaming`, the interface returns `Generator[dict]`; otherwise it returns `dict`. For the specific interface fields, refer to the `ServiceApi` return results.
+
+#### **Example**
+
+We can request a Chat-type App in an `Endpoint` and return the results directly:
+
+```python
+import json
+from typing import Mapping
+from werkzeug import Request, Response
+from dify_plugin import Endpoint

+class Duck(Endpoint):
+    def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response:
+        """
+        Invokes the endpoint with the given request.
+        """
+        app_id = values["app_id"]
+        def generator():
+            response = self.session.app.chat.invoke(
+                app_id=app_id, inputs={}, response_mode="streaming", conversation_id="", files=[]
+            )
+            for data in response:
+                yield f"{json.dumps(data)}\n\n"
+        return Response(generator(), status=200, content_type="text/html")
+```
+
+### **Requesting the Workflow Interface**
+
+#### Entry
+
+```python
+self.session.app.workflow
+```
+
+#### **Endpoint Specification**
+
+```python
+def invoke(
+    self,
+    app_id: str,
+    inputs: dict,
+    response_mode: Literal["streaming", "blocking"],
+    files: list,
+) -> Generator[dict, None, None] | dict:
+    pass
+```
+
+### **Requesting the Completion Interface**
+
+#### Entry
+
+```python
+self.session.app.completion
+```
+
+#### **Endpoint Specification**
+
+```python
+def invoke(
+    self,
+    app_id: str,
+    inputs: dict,
+    response_mode: Literal["streaming", "blocking"],
+    files: list,
+) -> Generator[dict, None, None] | dict:
+    pass
+```
 diff --git a/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.mdx b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.mdx new file mode 100644 index 00000000..1363fb36 --- /dev/null +++ b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/model.mdx @@ -0,0 +1,268 @@
+---
+title: Model
+---
+
+
+Reverse Model Request means that a plugin can make reverse requests to LLM capabilities within Dify, including all model types and features on the platform, such as TTS, Rerank, etc.
+
+Note that requesting a model requires passing a ModelConfig-type parameter. Its structure can be referenced in the Common Specification Definitions, and it differs slightly for different model types.
+
+For example, for LLM-type models, it must also include the `completion_params` and `mode` parameters. You can build this structure manually or use `model-selector` type parameters or configuration.
+ +### **Request LLM** + +#### Entry + +```python +self.session.model.llm +``` + +#### Endpoint: + +```python +def invoke( + self, + model_config: LLMModelConfig, + prompt_messages: list[PromptMessage], + tools: list[PromptMessageTool] | None = None, + stop: list[str] | None = None, + stream: bool = True, +) -> Generator[LLMResultChunk, None, None] | LLMResult: + pass +``` + +Note: If the model you're requesting doesn't have tool\_call capability, the tools passed here won't take effect. + +#### **Example** + +If you want to request OpenAI's gpt-4o-mini model in a Tool, refer to the following example code: + +```python +from collections.abc import Generator +from typing import Any + +from dify_plugin import Tool +from dify_plugin.entities.model.llm import LLMModelConfig +from dify_plugin.entities.tool import ToolInvokeMessage +from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage + +class LLMTool(Tool): + def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]: + response = self.session.model.llm.invoke( + model_config=LLMModelConfig( + provider='openai', + model='gpt-4o-mini', + mode='chat', + completion_params={} + ), + prompt_messages=[ + SystemPromptMessage( + content='you are a helpful assistant' + ), + UserPromptMessage( + content=tool_parameters.get('query') + ) + ], + stream=True + ) + + for chunk in response: + if chunk.delta.message: + assert isinstance(chunk.delta.message.content, str) + yield self.create_text_message(text=chunk.delta.message.content) +``` + +Notice that the `query` parameter from `tool_parameters` is passed in the code. + +### **Best Practices** + +It's not recommended to manually build `LLMModelConfig`. Instead, allow users to select their desired model in the UI. 
In this case, you can modify the tool's parameter list by adding a `model` parameter according to the following configuration: + +```yaml +identity: + name: llm + author: Dify + label: + en_US: LLM + zh_Hans: LLM + pt_BR: LLM +description: + human: + en_US: A tool for invoking a large language model + zh_Hans: 用于调用大型语言模型的工具 + pt_BR: A tool for invoking a large language model + llm: A tool for invoking a large language model +parameters: + - name: prompt + type: string + required: true + label: + en_US: Prompt string + zh_Hans: 提示字符串 + pt_BR: Prompt string + human_description: + en_US: used for searching + zh_Hans: 用于搜索网页内容 + pt_BR: used for searching + llm_description: key words for searching + form: llm + - name: model + type: model-selector + scope: llm + required: true + label: + en_US: Model + zh_Hans: 使用的模型 + pt_BR: Model + human_description: + en_US: Model + zh_Hans: 使用的模型 + pt_BR: Model + llm_description: which Model to invoke + form: form +extra: + python: + source: tools/llm.py +``` + +Note that in this example, the model's scope is specified as llm, so users can only select `llm` type parameters. 
This allows you to modify the above example code as follows: + +```python +from collections.abc import Generator +from typing import Any + +from dify_plugin import Tool +from dify_plugin.entities.model.llm import LLMModelConfig +from dify_plugin.entities.tool import ToolInvokeMessage +from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage + +class LLMTool(Tool): + def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]: + response = self.session.model.llm.invoke( + model_config=tool_parameters.get('model'), + prompt_messages=[ + SystemPromptMessage( + content='you are a helpful assistant' + ), + UserPromptMessage( + content=tool_parameters.get('query') + ) + ], + stream=True + ) + + for chunk in response: + if chunk.delta.message: + assert isinstance(chunk.delta.message.content, str) + yield self.create_text_message(text=chunk.delta.message.content) +``` + +### **Request Summary** + +You can request this endpoint to summarize a text. It will use the system model in your current workspace to summarize the text. 
+
+**Entry**:
+
+```python
+self.session.model.summary
+```
+
+**Endpoint**:
+
+* `text`: The text to be summarized
+* `instruction`: Additional instructions you want to add, allowing you to stylize the summary
+
+```python
+def invoke(
+    self, text: str, instruction: str,
+) -> str:
+    pass
+```
+
+### **Request TextEmbedding**
+
+#### Entry
+
+```python
+self.session.model.text_embedding
+```
+
+#### Endpoint
+
+```python
+def invoke(
+    self, model_config: TextEmbeddingModelConfig, texts: list[str]
+) -> TextEmbeddingResult:
+    pass
+```
+
+### **Request Rerank**
+
+#### Entry
+
+```python
+self.session.model.rerank
+```
+
+#### Endpoint
+
+```python
+def invoke(
+    self, model_config: RerankModelConfig, docs: list[str], query: str
+) -> RerankResult:
+    pass
+```
+
+### **Request TTS**
+
+#### Entry
+
+```python
+self.session.model.tts
+```
+
+#### Endpoint
+
+```python
+def invoke(
+    self, model_config: TTSModelConfig, content_text: str
+) -> Generator[bytes, None, None]:
+    pass
+```
+
+Note: The bytes stream returned by the TTS endpoint is an mp3 audio byte stream, with each iteration returning a complete piece of audio. If you want to perform more in-depth processing, please select an appropriate library.
+
+### **Request Speech2Text**
+
+#### Entry
+
+```python
+self.session.model.speech2text
+```
+
+#### Endpoint
+
+```python
+def invoke(
+    self, model_config: Speech2TextModelConfig, file: IO[bytes]
+) -> str:
+    pass
+```
+
+Where `file` is an mp3-encoded audio file.
+
+### **Request Moderation**
+
+#### Entry
+
+```python
+self.session.model.moderation
+```
+
+#### Endpoint
+
+```python
+def invoke(self, model_config: ModerationModelConfig, text: str) -> bool:
+    pass
+```
+
+If this endpoint returns `true`, it indicates that the `text` contains sensitive content.
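
A hedged sketch of collecting a TTS byte stream into a single mp3 payload; `fake_tts_stream` below is a stand-in for the generator returned by `self.session.model.tts.invoke(...)`, and its bytes are placeholders, not real audio:

```python
def fake_tts_stream():
    """Stand-in for the generator returned by the TTS endpoint."""
    yield b"ID3"        # placeholder bytes for the first mp3 chunk
    yield b"\x00\x01"   # placeholder bytes for a following chunk

def collect_audio(stream) -> bytes:
    """Concatenate streamed mp3 chunks into one payload, e.g. to write to a file."""
    return b"".join(stream)

audio = collect_audio(fake_tts_stream())
```

The collected payload can then be written to disk or handed to an audio-processing library.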
diff --git a/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.mdx b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.mdx new file mode 100644 index 00000000..40e2f236 --- /dev/null +++ b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/node.mdx @@ -0,0 +1,88 @@
+---
+title: Node
+---
+
+
+Reverse Node Request refers to the plugin's ability to access certain nodes within Dify's Chatflow/Workflow applications.
+
+The `ParameterExtractor` and `QuestionClassifier` nodes in `Workflow` encapsulate complex Prompt and code logic, and can use an LLM to accomplish many tasks that are difficult to solve with hard-coded logic. Plugins can request these two nodes.
+
+### **Request Parameter Extractor Node**
+
+**Entry**:
+
+```python
+self.session.workflow_node.parameter_extractor
+```
+
+**Endpoint**:
+
+```python
+def invoke(
+    self,
+    parameters: list[ParameterConfig],
+    model: ModelConfig,
+    query: str,
+    instruction: str = "",
+) -> NodeResponse:
+    pass
+```
+
+Where `parameters` is the list of parameters to extract, `model` follows the `LLMModelConfig` specification, `query` is the source text for parameter extraction, and `instruction` contains additional instructions for the LLM; the `NodeResponse` structure can be referenced in the documentation.
+ +#### **Example** + +If you want to extract a person's name from a conversation, refer to this code: + +```python +from collections.abc import Generator +from dify_plugin.entities.tool import ToolInvokeMessage +from dify_plugin import Tool +from dify_plugin.entities.workflow_node import ModelConfig, ParameterConfig + +class ParameterExtractorTool(Tool): + def _invoke( + self, tool_parameters: dict + ) -> Generator[ToolInvokeMessage, None, None]: + response = self.session.workflow_node.parameter_extractor.invoke( + parameters=[ + ParameterConfig( + name="name", + description="name of the person", + required=True, + type="string", + ) + ], + model=ModelConfig( + provider="langgenius/openai/openai", + name="gpt-4o-mini", + completion_params={}, + ), + query="My name is John Doe", + instruction="Extract the name of the person", + ) + yield self.create_text_message(response.outputs["name"]) +``` + +### **Request Question Classifier Node** + +**Entry**: + +```python +self.session.workflow_node.question_classifier +``` + +**Endpoint**: + +```python +def invoke( + self, + classes: list[ClassConfig], + model: ModelConfig, + query: str, + instruction: str = "", +) -> NodeResponse: + pass +``` + +This endpoint's parameters are consistent with `ParameterExtractor`, and the final result is stored in `NodeResponse.outputs['class_name']`. 
diff --git a/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.mdx b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.mdx new file mode 100644 index 00000000..0c1db454 --- /dev/null +++ b/en/plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.mdx @@ -0,0 +1,75 @@
+---
+title: Tool
+---
+
+
+When encountering the following scenarios:
+
+* A tool-type plugin implements a feature, but its result does not meet expectations and the data needs reprocessing
+* A task requires web crawling and needs the flexibility to choose among crawling services
+* You need to combine the results of multiple tools, which is difficult to handle in a Workflow application
+
+In these cases, you need to request other implemented tools from within the plugin. These tools can come from marketplace tool plugins, self-built Workflow as a Tool, or custom tools.
+
+These requirements can be met by using the plugin's `self.session.tool` field.
+
+### **Request Installed Tools**
+
+Allows plugins to request the various tools installed in the current Workspace, including other tool-type plugins.
+
+**Entry**:
+
+```python
+self.session.tool
+```
+
+**Endpoint**:
+
+```python
+def invoke_builtin_tool(
+    self, provider: str, tool_name: str, parameters: dict[str, Any]
+) -> Generator[ToolInvokeMessage, None, None]:
+    pass
+```
+
+Where `provider` is the plugin ID plus the tool provider name, formatted like `langgenius/google/google`, `tool_name` is the specific tool name, and `parameters` are the parameters passed to that tool.
+
+### **Request Workflow as Tool**
+
+For more information about Workflow as Tool, please refer to this documentation.
+
+**Entry**:
+
+```python
+self.session.tool
+```
+
+**Endpoint**:
+
+```python
+def invoke_workflow_tool(
+    self, provider: str, tool_name: str, parameters: dict[str, Any]
+) -> Generator[ToolInvokeMessage, None, None]:
+    pass
+```
+
+Here, `provider` is the tool's ID, and `tool_name` is the name required when creating the tool.
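
The `provider` identifier format can be illustrated with a small hypothetical helper; the three-part splitting rule is an assumption inferred from the `langgenius/google/google` example, not a documented API:

```python
def split_provider(provider: str) -> tuple[str, str, str]:
    """Split a builtin-tool provider identifier of the assumed form
    '<plugin author>/<plugin name>/<tool provider name>'."""
    author, plugin_name, tool_provider = provider.split("/")
    return author, plugin_name, tool_provider
```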
+
+### **Request Custom Tool**
+
+**Entry**:
+
+```python
+self.session.tool
+```
+
+**Endpoint**:
+
+```python
+def invoke_api_tool(
+    self, provider: str, tool_name: str, parameters: dict[str, Any]
+) -> Generator[ToolInvokeMessage, None, None]:
+    pass
+```
+
+Here, `provider` is the tool's ID, and `tool_name` is the `operation_id` in the OpenAPI schema; if no `operation_id` exists, it is the `tool_name` automatically generated by Dify, which can be seen on the tool management page.
 diff --git a/en/plugins/schema-definition/tool.mdx b/en/plugins/schema-definition/tool.mdx new file mode 100644 index 00000000..b560d10d --- /dev/null +++ b/en/plugins/schema-definition/tool.mdx @@ -0,0 +1,112 @@
+---
+title: Tool
+---
+
+
+Before reading the detailed interface documentation, make sure you have read [Quick start: Tools](../develop-plugins/tool-plugin.md) and have a general understanding of the Dify plugin's tool access process.
+
+### **Data Structures**
+
+#### **Message Returns**
+
+Dify supports multiple message types including `text`, `links`, `images`, `file BLOBs`, and `JSON`. You can return different types of messages through various interfaces.
+
+By default, a tool's output in a `workflow` contains three fixed variables: `files`, `text`, and `json`. You can return data for these three variables using the methods below.
+
+For example, use `create_image_message` to return images. Tools also support custom output variables for easier reference in `workflow`.
+
+#### **Image URL**
+
+Simply pass the image URL, and Dify will automatically download the image and return it to users.
+
+```python
+def create_image_message(self, image: str) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Links**
+
+Use this interface to return a link:
+
+```python
+def create_link_message(self, link: str) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Text**
+
+Use this interface to return a text message:
+
+```python
+def create_text_message(self, text: str) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Files**
+
+Use this interface to return raw file data (images, audio, video, PPT, Word, Excel, etc.):
+
+* `blob`: Raw file data in bytes
+* `meta`: File metadata. Specify the `mime_type` if needed; otherwise Dify uses `application/octet-stream` as the default
+
+```python
+def create_blob_message(self, blob: bytes, meta: dict = None) -> ToolInvokeMessage:
+    pass
+```
+
+#### **JSON**
+
+Use this interface to return formatted JSON. It is commonly used for data transfer between workflow nodes, and most large models can read and understand JSON in agent mode.
+
+```python
+def create_json_message(self, json: dict) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Variables**
+
+For non-streaming output variables, use this interface. Later values override earlier ones:
+
+```python
+def create_variable_message(self, variable_name: str, variable_value: Any) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Streaming Variables**
+
+For typewriter-effect text output, use streaming variables. If you reference such a variable in a chatflow application's answer node, the text will display with a typewriter effect. Currently only string data is supported:
+
+```python
+def create_stream_variable_message(
+    self, variable_name: str, variable_value: str
+) -> ToolInvokeMessage:
+    pass
+```
+
+#### **Return Variable Definitions**
+
+To reference tool output variables in workflow applications, you need to define the possible output variables beforehand. Dify plugins support `json_schema`-format output variable definitions.
Here's a simple example: + +```yaml +identity: + author: author + name: tool + label: + en_US: label + zh_Hans: 标签 + ja_JP: ラベル + pt_BR: etiqueta +description: + human: + en_US: description + zh_Hans: 描述 + ja_JP: 説明 + pt_BR: descrição + llm: description +output_schema: + type: object + properties: + name: + type: string +``` + +This example defines a simple tool with an `output_schema` containing a `name` field that can be referenced in `workflow`. Note that you still need to return a variable in the tool's implementation code for actual use, otherwise it will return `None`. diff --git a/en/policies/agreement/README.mdx b/en/policies/agreement/README.mdx new file mode 100644 index 00000000..4eaeb12f --- /dev/null +++ b/en/policies/agreement/README.mdx @@ -0,0 +1,27 @@ +--- +title: User Agreement +--- + + +### Terms of Service & Privacy Policy + +You can review the Terms of Service and Privacy Policy applicable to using Dify.AI via the links below: + +* [Terms of Service](https://dify.ai/terms) +* [Privacy Policy](https://dify.ai/privacy) + +### Compliance Certifications + +Dify.AI has obtained the following certifications: + +* **SOC 2 Type I** +* **SOC 2 Type II** +* **ISO 27001:2022 Certification** +* **GDPR Data Protection Agreement (DPA)** + +For instructions on how to download and check these compliance certificates, please refer to the relevant documentation. 
+
+[Get Compliance Report](get-compliance-report.md)
+
 diff --git a/en/policies/agreement/get-compliance-report.mdx b/en/policies/agreement/get-compliance-report.mdx new file mode 100644 index 00000000..b4dc3d1b --- /dev/null +++ b/en/policies/agreement/get-compliance-report.mdx @@ -0,0 +1,91 @@
+---
+title: Get Compliance Report
+description: 'Author: Yongle, Allen'
+---
+
+
+From the moment Dify.AI launched its product, it received inquiries from individual developers and enterprise users worldwide regarding **whether Dify.AI meets information security and data privacy compliance requirements.** Consequently, the team has strictly followed industry standards from the design phase onward, gradually establishing a comprehensive information security and data privacy compliance management system.
+
+Dify.AI has officially obtained **SOC 2 Type I, SOC 2 Type II, ISO 27001:2022, and GDPR certifications**, demonstrating that the product has reached internationally leading standards in data security, privacy protection, and compliance. This milestone further underscores Dify.AI's unwavering commitment to user data security.
+
+If you are using Dify’s cloud version as part of your vendor security evaluation, click the **top-right corner** of the page, select **Compliance**, and download the necessary reports to review Dify's compliance and certification documents.
+
+For Enterprise customers, if you want to check the compliance certificates and reports, please contact your account representative to initiate the appropriate business process.
+
+### Compliance Reports Availability
+
+Different team tiers have access to the following compliance certifications:
+
+| Certification Type | Free / Sandbox | Professional | Enterprise |
+| --- | --- | --- | --- |
+| SOC 2 Type I Report | - | ✓ | ✓ |
+| SOC 2 Type II Report | - | - | ✓ |
+| ISO 27001:2022 Certificate | - | - | ✓ |
+| GDPR Data Protection Agreement | ✓ | ✓ | ✓ |
+
+> For access to higher-level compliance certifications, please upgrade your team on the [Pricing](https://dify.ai/pricing) page.
+
+The following explains how to obtain the various compliance certification reports.
+
+#### **SOC 2 Type I Report**
+
+A **SOC 2 Type I** report is a third-party audit report that evaluates and confirms the design and implementation of an organization's security controls at a specific point in time. SOC 2, which stands for **System and Organization Controls 2**, is a set of criteria for managing and protecting data based on five trust service principles: **Security, Availability, Processing Integrity, Confidentiality, and Privacy**. The Type I report specifically assesses whether an organization has appropriate controls in place to meet the relevant criteria at the time of the audit.
+
+Only team owners can download Dify's SOC 2 Type I report via **Top-right → Compliance**.
+
+![SOC 2 Type I Report](https://assets-docs.dify.ai/2025/02/552f4358bea904c4bf12a131a410916c.png)
+
+#### **SOC 2 Type II Report**
+
+A **SOC 2 Type II** report is a third-party audit report that evaluates and confirms the design and implementation of an organization's security controls over a specified period. SOC 2, which stands for **System and Organization Controls 2**, is a set of criteria for managing and protecting data based on five trust service principles: **Security, Availability, Processing Integrity, Confidentiality, and Privacy**. The Type II report provides detailed insights into how well an organization’s security controls work over time, giving stakeholders confidence that the organization consistently adheres to industry standards for managing sensitive data.
+
+Only team owners can download Dify's SOC 2 Type II report via **Top-right → Compliance**.
+ +![SOC 2 Type II Report](https://assets-docs.dify.ai/2025/02/06975e86faba9928a11c962a57aa454f.png) + +#### **ISO 27001:2022 Certificate** + +The **ISO 27001:2022** certificate is an internationally recognized standard for **information security management systems (ISMS)**. It is part of the **ISO/IEC 27000 family of standards**, which focuses on protecting sensitive information through a comprehensive set of security controls. The ISO 27001:2022 standard is designed to help organizations establish, implement, maintain, and continually improve their information security management system. The **2022** version of the standard reflects the latest updates and best practices in information security, aligning with current security challenges and technological advancements. It provides a systematic approach to managing sensitive company information, ensuring its confidentiality, integrity, and availability through risk management and security controls. + +![ISO 27001:2022 Certificate](https://assets-docs.dify.ai/2025/02/7aaf7771af28bf7182bf7fda40c7b5bb.png) + +#### **GDPR Data Protection Agreement** + +A **Data Protection Agreement (DPA)** is a legally binding contract between a **data controller** and a **data processor** under the General Data Protection Regulation (**GDPR**). The GDPR, which came into effect in May 2018, sets out the legal framework for data protection within the European Union (EU), and applies to organizations that process personal data of EU residents. A DPA is required when a data controller engages a third-party processor to handle personal data, ensuring that both parties are in compliance with GDPR obligations and safeguarding the rights of individuals whose data is being processed. + +**Everyone** can download Dify's DPA from our official website.
+ +![DPA](https://assets-docs.dify.ai/2025/02/291cddb3bf9dfd8c92967eabb87b95a0.png) + diff --git a/en/policies/open-source.mdx b/en/policies/open-source.mdx new file mode 100644 index 00000000..433408d0 --- /dev/null +++ b/en/policies/open-source.mdx @@ -0,0 +1,8 @@ +--- +title: License +--- + + +Dify's community edition is open-source and licensed under an Apache 2.0-based license, with additional conditions. Please refer to the [LICENSE](https://github.com/langgenius/dify/blob/main/LICENSE) file for more details. + +Any issues or questions about the license should be directed to [business@dify.ai](mailto:business@dify.ai). diff --git a/en/user-guide/build-app/flow-app/create-flow-app.mdx b/en/user-guide/build-app/flow-app/create-flow-app.mdx deleted file mode 100644 index 21a44739..00000000 --- a/en/user-guide/build-app/flow-app/create-flow-app.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Creating Applications -version: 'English' ---- - -## Chatflow - -**Use Cases:** - -Suitable for conversational scenarios, including customer service, semantic search, and other dialogue-based applications that require multiple logical steps when constructing responses. The distinctive feature of this type of application is its support for multi-round conversational interactions with generated results, allowing for adjustment of the generated content. - -Common interaction path: Give instructions → Generate content → Multiple discussions about the content → Regenerate results → End. - -On the "Studio" page, click "Create Blank Application" on the left, then select "Workflow Orchestration" under "Chat Assistant." - -![](/en-us/img/8eb5a12939c298bc7cb9a778d10d42db.png) - -## Workflow - -**Use Cases:** - -Suitable for automation and batch processing scenarios, ideal for high-quality translation, data analysis, content generation, email automation, and similar applications. This type of application does not support multi-round conversational interactions with generated results. 
- -Common interaction path: Give instructions → Generate content → End - -On the "Studio" page, click "Create Blank Application" on the left, then select "Workflow" to complete the creation. - -![Workflow Entry](/images/assets/workflow.png) - -## Differences Between the Two Application Types - -1. The End node belongs to Workflow as a termination node and can only be selected at the end of the process. -2. The Answer node belongs to Chatflow; it is used for streaming text output and supports output during intermediate steps of the process. -3. Chatflow has built-in chat memory (Memory) for storing and transmitting historical messages from multiple rounds of conversation; it can be enabled in LLM, question classification, and other nodes. Workflow has no Memory-related configuration, so it cannot be enabled. -4. Chatflow's start node built-in variables include: `sys.query`, `sys.files`, `sys.conversation_id`, `sys.user_id`. Workflow's start node built-in variables include: `sys.files`, `sys.user_id` \ No newline at end of file diff --git a/en/user-guide/build-app/flow-app/nodes/end.mdx b/en/user-guide/build-app/flow-app/nodes/end.mdx deleted file mode 100644 index 18febd60..00000000 --- a/en/user-guide/build-app/flow-app/nodes/end.mdx +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: End ---- - -### 1 Definition - -Define the final output content of a workflow. Every workflow needs at least one End node so that it can output a final result when execution completes. - -The end node is a termination point in the process; no further nodes can be added after it. In a workflow application, results are only output when the end node is reached. If there are conditional branches in the process, multiple end nodes need to be defined. - -The end node must declare one or more output variables, which can reference any upstream node's output variables. - -End nodes are not supported within Chatflow.
- -*** - -### 2 Scenarios - -In the following long story generation workflow, the variable `Output` declared by the end node is the output of the upstream code node. This means the workflow will end after the Code node completes execution and will output the Code node's execution result. - -![](/images/assets/end-answer.png) - -**Single Path Execution Example:** - -![](/images/assets/single-path-execution.png) - -**Multi-Path Execution Example:** - -![](/en-us/images/assets/end-3.png) diff --git a/en/user-guide/build-app/flow-app/nodes/iteration.mdx b/en/user-guide/build-app/flow-app/nodes/iteration.mdx deleted file mode 100644 index 5f24b177..00000000 --- a/en/user-guide/build-app/flow-app/nodes/iteration.mdx +++ /dev/null @@ -1,163 +0,0 @@ ---- -title: Iteration ---- - -### Definition - -Execute multiple steps on an array until all results are output. - -The iteration step performs the same steps on each item in a list. To use iteration, ensure that the input value is formatted as a list object. The iteration node allows AI workflows to handle more complex processing logic. It is a user-friendly version of the loop node, making some compromises in customization so that non-technical users can quickly get started. - -*** - -### Scenarios - -#### **Example 1: Long Article Iteration Generator** - -![Long Story Generator](/images/assets/long-article-iteration-generator.png) - -1. Enter the story title and outline in the **Start Node**. -2. Use a **Generate Subtitles and Outlines Node**, which uses an LLM to generate the complete content from the user input. -3. Use an **Extract Subtitles and Outlines Node** to convert the complete content into an array format. -4. Use an **Iteration Node** to wrap an **LLM Node** and generate content for each chapter through multiple iterations. -5. Add a **Direct Answer Node** inside the iteration node to achieve streaming output after each iteration. - -**Detailed Configuration Steps** - -1.
Configure the story title (title) and outline (outline) in the **Start Node**. - -![Start Node Configuration](/images/assets/workflow-start-node.png) - -2. Use a **Generate Subtitles and Outlines Node** to convert the story title and outline into complete text. - -![Template Node](/images/assets/workflow-generate-subtitles-node.png) - -3. Use an **Extract Subtitles and Outlines Node** to convert the story text into an array (Array) structure. The parameter to extract is `sections`, and the parameter type is `Array[Object]`. - -![Parameter Extraction](/images/assets/workflow-extract-subtitles-and-outlines.png) - - -The effectiveness of parameter extraction is influenced by the model's inference capability and the instructions given. Using a model with stronger inference capabilities and adding examples in the **instructions** can improve the parameter extraction results. - - -4. Use the array-formatted story outline as the input for the iteration node and process it within the iteration node using an **LLM Node**. - -![Configure Iteration Node](/images/assets/workflow-iteration-node.png) - -Configure the input variables `GenerateOverallOutline/output` and `Iteration/item` in the LLM Node. - -![Configure LLM Node](/images/assets/workflow-iteration-llm-node.png) - - -Built-in variables for iteration: `items[object]` and `index[number]`. - -`items[object]` represents the input item for each iteration. - -`index[number]` represents the current iteration round. - - -5. Configure a **Direct Reply Node** inside the iteration node to achieve streaming output after each iteration. - -![Configure Answer Node](/images/assets/workflow-configure-anwer-node.png) - -6. Complete debugging and preview.
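To make the `Array[Object]` structure from step 3 concrete, here is a small Python sketch of what the extracted `sections` array might look like and how each iteration round consumes it. The field names are hypothetical; only the `sections` name, the `Array[Object]` type, and the `items[object]` / `index[number]` variables come from the steps above.

```python
# Hypothetical shape of the `sections` array produced by the parameter
# extraction step (field names are illustrative):
sections = [
    {"subtitle": "Chapter 1: The Departure", "outline": "The hero leaves home."},
    {"subtitle": "Chapter 2: The Trials", "outline": "Obstacles on the road."},
]

# Inside the iteration node, each round sees one element (`items[object]`)
# together with the current round number (`index[number]`):
for index, item in enumerate(sections):
    print(index, item["subtitle"])
```

Each round of the iteration then feeds one `item` into the wrapped LLM node to generate that chapter's content.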
- -![Generate by Iterating Through Story Chapters](/images/assets/iteration-node-iteration-through-story-chapters.png) - -#### **Example 2: Long Article Iteration Generator (Another Arrangement)** - -![](/images/assets/iteration-node-iteration-long-article-iteration-generator.png) - -* Enter the story title and outline in the **Start Node**. -* Use an **LLM Node** to generate subheadings and corresponding content for the article. -* Use a **Code Node** to convert the complete content into an array format. -* Use an **Iteration Node** to wrap an **LLM Node** and generate content for each chapter through multiple iterations. -* Use a **Template Conversion** Node to convert the string array output from the iteration node back to a string. -* Finally, add a **Direct Reply Node** to directly output the converted string. - -### What is Array Content - -A list is a specific data type where elements are separated by commas and enclosed in `[` and `]`. For example: - -**Numeric:** - -``` -[0,1,2,3,4,5] -``` - -**String:** - -``` -["Monday", "Tuesday", "Wednesday", "Thursday"] -``` - -**JSON Object:** - -``` -[ - { - "name": "Alice", - "age": 30, - "email": "alice@example.com" - }, - { - "name": "Bob", - "age": 25, - "email": "bob@example.com" - }, - { - "name": "Charlie", - "age": 35, - "email": "charlie@example.com" - } -] -``` - -*** - -### Nodes Supporting Array Return - -* Code Node -* Parameter Extraction -* Knowledge Base Retrieval -* Iteration -* Tools -* HTTP Request - -### How to Obtain Array-Formatted Content - -**Return Using the CODE Node** - -![Parameter Extraction](/images/assets/workflow-extract-subtitles-and-outlines.png) - -**Return Using the Parameter Extraction Node** - -![Parameter Extraction](/images/assets/workflow-parameter-extraction-node.png) - -### How to Convert an Array to Text - -The output variable of the iteration node is in array format and cannot be directly output. You can use a simple step to convert the array back to text. 
- -**Convert Using a Code Node** - -![Code Node Conversion](/images/assets/iteration-code-node-convert.png) - -Code example: - -```python -def main(articleSections: list): - data = articleSections - return { - "result": "\n".join(data) - } -``` - -**Convert Using a Template Node** - -![Template Node Conversion](/images/assets/workflow-template-node.png) - -Code example: - -```django -{{ articleSections | join("\n") }} -``` diff --git a/en/user-guide/build-app/flow-app/nodes/start.mdx b/en/user-guide/build-app/flow-app/nodes/start.mdx deleted file mode 100644 index e918920a..00000000 --- a/en/user-guide/build-app/flow-app/nodes/start.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Start ---- - -#### Definition - -The **"Start"** node is a critical preset node in the [Chatflow / Workflow](../) application. It provides essential initial information, such as user input and [uploaded files](/en-us/user-guide/build-app/flow-app/file-upload), to support the normal flow of the application and subsequent workflow nodes. - -#### Configuring the Node - -On the Start node's settings page, you'll find two sections: **"Input Fields"** and preset **[System Variables](/en-us/user-guide/build-app/flow-app/variables#system-variables)**. - -![Chatflow and Workflow](/images/assets/chatflow-workflow.png) - -#### Input Field - -Input fields are configured by application developers to prompt users for additional information. - -For example, in a weekly report application, users might be required to provide background information such as name, work date range, and work details in a specific format. This preliminary information helps the LLM generate higher quality responses. - -Six types of input variables are supported, all of which can be set as required: - -* **Text:** Short text, filled in by the user, with a maximum length of 256 characters. -* **Paragraph:** Long text, allowing users to input longer content.
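Outside Dify, the Code node's conversion can be sanity-checked with plain Python. Note that the intended separator is the newline escape `\n`; a literal `/n` would join the chapters with the two characters slash and n instead of line breaks.

```python
def main(articleSections: list) -> dict:
    # Mirrors the Code node above: join the chapter strings with newlines.
    return {"result": "\n".join(articleSections)}

chapters = ["Chapter 1 text.", "Chapter 2 text."]
print(main(chapters)["result"])
```

The returned `result` is a single string, which downstream nodes can output directly.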
-* **Select:** Fixed options set by the developer; users can only select from preset options and cannot input custom content. -* **Number:** Only allows numerical input. -* **Single File:** Allows users to upload a single file. Supports document types, images, audio, video, and other file types. Users can upload locally or paste a file URL. For detailed usage, refer to [File Upload](/en-us/user-guide/build-app/flow-app/file-upload). -* **File List:** Allows users to batch upload files. Supports document types, images, audio, video, and other file types. Users can upload locally or paste file URLs. For detailed usage, refer to [File Upload](/en-us/user-guide/build-app/flow-app/file-upload). - - -Dify's built-in document extractor node can only process certain document formats. For processing images, audio, or video files, refer to External Data Tools to set up corresponding file processing nodes. - -Once configured, users will be guided to provide necessary information to the LLM before using the application. More information will help to improve the LLM's question-answering efficiency. - -### System Variables - -System variables are system-level parameters preset in Chatflow / Workflow applications that can be globally read by other nodes within the application. They are typically used in advanced development scenarios, such as building multi-round conversation applications, collecting application logs and monitoring, and recording usage behavior across different applications and users. 
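To make the log-collection use case concrete, here is a minimal sketch of how an external service might key its records on these system variables. The record shape and field names are our illustration, not a Dify API; only the `sys.*` variable names come from this section.

```python
import datetime
import json

def build_run_log(sys_vars: dict) -> str:
    """Assemble a monitoring record from workflow system variables.

    The record shape is illustrative; Dify defines only the variables themselves.
    """
    record = {
        "app_id": sys_vars["sys.app_id"],
        "workflow_id": sys_vars["sys.workflow_id"],
        "workflow_run_id": sys_vars["sys.workflow_run_id"],
        "user_id": sys_vars["sys.user_id"],
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)

log_line = build_run_log({
    "sys.app_id": "app-123",
    "sys.workflow_id": "wf-456",
    "sys.workflow_run_id": "run-789",
    "sys.user_id": "user-001",
})
print(log_line)
```

Because `sys.app_id` and `sys.workflow_run_id` are unique per application and per run, such records can be grouped per app and traced back to a single execution.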
- -**Workflow** - -Workflow type applications provide the following system variables: - -| Variable Name | Data Type | Description | Notes | -|---------|--------|------|------| -| `sys.files` [LEGACY] | Array[File] | File parameter, stores images uploaded by users when initially using the application | Image upload feature needs to be enabled in "Features" at the top right of the application configuration page | -| `sys.user_id` | String | User ID, system automatically assigns a unique identifier to each user when using the workflow application to distinguish different conversation users | | -| `sys.app_id` | String | Application ID, system assigns a unique identifier to each Workflow application to distinguish different applications and record basic information of the current application through this parameter | For users with development capabilities to distinguish and locate different Workflow applications using this parameter | -| `sys.workflow_id` | String | Workflow ID, used to record all node information contained in the current Workflow application | For users with development capabilities to track and record node information within the Workflow | -| `sys.workflow_run_id` | String | Workflow application run ID, used to record the running status of the Workflow application | For users with development capabilities to track application run history | - -![](/en-us/img/cb39be409f0037549d45f4b7d05aa9ce.png) - -**Chatflow** - -Chatflow type applications provide the following system variables: - -| Variable Name | Data Type | Description | Notes | -|---------|--------|------|------| -| `sys.query` | String | Initial content entered by users in the dialogue box | | -| `sys.files` | Array[File] | Images uploaded by users in the dialogue box | Image upload feature needs to be enabled in "Features" at the top right of the application configuration page | -| `sys.dialogue_count` | Number | Number of conversation rounds when users interact with Chatflow type applications. Count automatically increases by 1 after each round of dialogue, can be combined with if-else nodes for rich branching logic. For example, reviewing and analyzing conversation history at round X | | -| `sys.conversation_id` | String | Unique identifier for dialogue interaction sessions, grouping all related messages into the same conversation, ensuring LLM maintains continuous dialogue on the same topic and context | | -| `sys.user_id` | String | Unique identifier assigned to each application user to distinguish different conversation users | | -| `sys.app_id` | String | Application ID, system assigns a unique identifier to each Workflow application to distinguish different applications and record basic information of the current application through this parameter | For users with development capabilities to distinguish and locate different Workflow applications using this parameter | -| `sys.workflow_id` | String | Workflow ID, used to record all node information contained in the current Workflow application | For users with development capabilities to track and record node information within the Workflow | -| `sys.workflow_run_id` | String | Workflow application run ID, used to record the running status of the Workflow application | For users with development capabilities to track application run history | - -![](/en-us/img/233efef6802ae700489f3ab3478bca6b.png) diff --git a/en/user-guide/build-app/flow-app/nodes/tools.mdx b/en/user-guide/build-app/flow-app/nodes/tools.mdx deleted file mode 100644 index a793f114..00000000 --- a/en/user-guide/build-app/flow-app/nodes/tools.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Tools ---- - -### Definition - -The workflow provides a rich selection of tools, categorized into three types: - -* **Built-in Tools**: Tools provided by Dify. -* **Custom Tools**: Tools imported or configured via the OpenAPI/Swagger standard format. -* **Workflows**: Workflows that have been published as tools.
- -Before using built-in tools, you may need to **authorize** the tools. - -If built-in tools do not meet your needs, you can create custom tools in the **Dify menu navigation → Tools** section. - -You can also orchestrate a more complex workflow and publish it as a tool. - -![Tool Selection](/images/assets/workflow-tool.png) - -![Configuring Google Search Tool to Retrieve External Knowledge](/images/assets/workflow-google-search-tool.png) - -Configuring a tool node generally involves two steps: - -1. Authorizing the tool / creating a custom tool / publishing a workflow as a tool. -2. Configuring the tool's input and parameters. - -For more information on how to create custom tools and configure them, please refer to the [Tool Configuration Guide](/en-us/user-guide/tools/introduction). diff --git a/en/user-guide/build-app/flow-app/shotcut-key.mdx b/en/user-guide/build-app/flow-app/shotcut-key.mdx deleted file mode 100644 index c5b3cb0d..00000000 --- a/en/user-guide/build-app/flow-app/shotcut-key.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Shortcut Key -version: 'English' ---- - -The Chatflow / Workflow application orchestration page supports the following shortcut keys to help you improve the efficiency of orchestrating nodes.
- -| Windows | macOS | Explanation | -| ---------------- | ------------------- | ------------------------------ | -| Ctrl + C | Command + C | Copy nodes | -| Ctrl + V | Command + V | Paste nodes | -| Ctrl + D | Command + D | Duplicate nodes | -| Ctrl + O | Command + O | Organize nodes | -| Ctrl + Z | Command + Z | Undo | -| Ctrl + Y | Command + Y | Redo | -| Ctrl + Shift + Z | Command + Shift + Z | Redo | -| Ctrl + 1 | Command + 1 | Canvas fits view | -| Ctrl + (-) | Command + (-) | Canvas zooms out | -| Ctrl + (=) | Command + (=) | Canvas zooms in | -| Shift + 1 | Shift + 1 | Resets canvas view to 100% | -| Shift + 5 | Shift + 5 | Scales canvas to 50% | -| H | H | Canvas toggles to Hand mode | -| V | V | Canvas toggles to Pointer mode | -| Delete/Backspace | Delete/Backspace | Delete selected nodes | -| Alt + R | Option + R | Workflow starts to run | - diff --git a/en/user-guide/build-app/flow-app/variables.mdx b/en/user-guide/build-app/flow-app/variables.mdx deleted file mode 100644 index 13193400..00000000 --- a/en/user-guide/build-app/flow-app/variables.mdx +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: Variables -version: 'English' ---- - -Workflow and Chatflow type applications are composed of independent nodes. Most nodes have input and output items, but the input information for each node varies, and the outputs from different nodes also differ. - -How can we use a fixed symbol to **represent dynamically changing content?** Variables, acting as dynamic data containers, can store and transmit variable content, being referenced across different nodes to achieve flexible communication of information between nodes. - -### **System Variables** - -System variables are system-level parameters preset in Chatflow / Workflow applications that can be globally read by other nodes. System-level variables all begin with `sys`. 
- -#### Workflow - -Workflow type applications provide the following system variables: -
-| Variable Name | Data Type | Description | Notes |
-|---------|--------|------|------|
-| `sys.files` [LEGACY] | Array[File] | File parameter, stores images uploaded by users when initially using the application | Image upload feature needs to be enabled in "Features" at the top right of the application configuration page |
-| `sys.user_id` | String | User ID, system automatically assigns a unique identifier to each user when using the workflow application to distinguish different conversation users | |
-| `sys.app_id` | String | Application ID, system assigns a unique identifier to each Workflow application to distinguish different applications and record basic information of the current application through this parameter | For users with development capabilities to distinguish and locate different Workflow applications using this parameter |
-| `sys.workflow_id` | String | Workflow ID, used to record all node information contained in the current Workflow application | For users with development capabilities to track and record node information within the Workflow |
-| `sys.workflow_run_id` | String | Workflow application run ID, used to record the running status of the Workflow application | For users with development capabilities to track application run history |
-
-![](/en-us/img/c405efa31fd5708542fdc3bd7c0cb708.png)
-
-#### Chatflow
-
-Chatflow type applications provide the following system variables:
-
-| Variable Name | Data Type | Description | Notes |
-|---------|--------|------|------|
-| `sys.query` | String | Initial content entered by users in the dialogue box | |
-| `sys.files` | Array[File] | Images uploaded by users in the dialogue box | Image upload feature needs to be enabled in "Features" at the top right of the application configuration page |
-| `sys.dialogue_count` | Number | Number of conversation rounds when users interact with Chatflow type applications. Count automatically increases by 1 after each round of dialogue, can be combined with if-else nodes for rich branching logic. For example, reviewing and analyzing conversation history at round X | |
-| `sys.conversation_id` | String | Unique identifier for dialogue interaction sessions, grouping all related messages into the same conversation, ensuring LLM maintains continuous dialogue on the same topic and context | |
-| `sys.user_id` | String | Unique identifier assigned to each application user to distinguish different conversation users | |
-| `sys.app_id` | String | Application ID, system assigns a unique identifier to each Workflow application to distinguish different applications and record basic information of the current application through this parameter | For users with development capabilities to distinguish and locate different Workflow applications using this parameter |
-| `sys.workflow_id` | String | Workflow ID, used to record all node information contained in the current Workflow application | For users with development capabilities to track and record node information within the Workflow |
-| `sys.workflow_run_id` | String | Workflow application run ID, used to record the running status of the Workflow application | For users with development capabilities to track application run history |
- -![](/en-us/img/e387366fe2643688d57e6b9a69eacb1b.png) - -### Environment Variables - -**Environment variables are used to protect sensitive information involved in workflows**, such as API keys and database passwords used when running workflows. They are stored in the workflow rather than in the code to facilitate sharing across different environments. - -![](/en-us/img/en-env-variable.png) - -The following three data types are supported: - -* String -* Number -* Secret - -Environment variables have the following characteristics: - -* Environment variables can be globally referenced in most nodes -* Environment variable names cannot be duplicated -* Environment variables are read-only and cannot be written to - -### Conversation Variables - -> Conversation variables are designed for multi-round conversation scenarios. Since Workflow type applications have linear and independent interactions without multiple conversation exchanges, conversation variables are only applicable to Chatflow type (Chat Assistant → Workflow Orchestration) applications. - -**Conversation variables allow application developers to specify particular information that needs to be temporarily stored within the same Chatflow session, ensuring this information can be referenced across multiple rounds of dialogue within the current workflow**. This includes context, files uploaded to the dialogue box (coming soon), and user preferences input during conversations. It's like providing LLM with a "memo" that can be checked at any time, avoiding information discrepancies due to LLM memory errors. - -For example, you can store the language preference input by users in the first round of conversation in conversation variables, and LLM will reference this information when responding and use the specified language to reply to users in subsequent conversations. 
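The language-preference example can be sketched as a toy simulation in plain Python. This is illustrative only, not Dify's actual mechanism; in Dify, writes go through the Variable Assigner node.

```python
# Toy simulation of a conversation variable persisting across dialogue rounds.
conversation_vars = {"language": ""}

def round_one(user_message: str) -> None:
    # Round 1: store the user's stated preference (in Dify, via Variable Assigner).
    conversation_vars["language"] = user_message

def round_n(question: str) -> str:
    # Later rounds: the LLM prompt references the stored value.
    return f"[answering '{question}' in {conversation_vars['language']}]"

round_one("French")
print(round_n("What is Dify?"))
```

The key property mirrored here is that the value written in round 1 is still readable in every later round of the same session.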
- -![](/en-us/img/conversation-var.png) - -**Conversation variables** support the following six data types: - -* String -* Number -* Object -* Array[string] -* Array[number] -* Array[object] - -**Conversation variables** have the following characteristics: - -* Conversation variables can be globally referenced in most nodes -* Writing to conversation variables requires using the [Variable Assigner](./nodes/variable-assigner) node -* Conversation variables are readable and writable - -For more details about how to use conversation variables with variable assigner nodes, please refer to the [Variable Assigner](./nodes/variable-assigner) node documentation. - -### Notes - -* To avoid variable name duplication, node names cannot be duplicated -* Node output variables are generally fixed variables and cannot be edited diff --git a/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx b/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx deleted file mode 100644 index b5cb90c2..00000000 --- a/en/user-guide/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Knowledge Base and Document Maintenance ---- - -## Knowledge Base Management - -> The knowledge base page is accessible only to the team owner, team administrators, and users with editor permissions. - -On the Dify team homepage, click the "Knowledge Base" tab at the top, select the knowledge base you want to manage, then click **Settings** in the left navigation panel to make adjustments. You can modify the knowledge base name, description, visibility permissions, indexing mode, embedding model, and retrieval settings. - - - Knowledge Base Settings - - -**Knowledge Base Name**: Used to distinguish among different knowledge bases. - -**Knowledge Description**: Used to describe the information represented by the documents in the knowledge base. 
- -**Visibility Permissions**: Defines access control for the knowledge base with three levels: - -1. **"Only Me"**: Restricts access to the knowledge base owner. -2. **"All team members"**: Grants access to every member of the team. -3. **"Partial team members"**: Allows selective access to specific team members. - -Users without appropriate permissions cannot access the knowledge base. When granting access to team members (options 2 or 3), authorized users receive full permissions, including view, edit, and delete rights for the knowledge base content. - -**Indexing Mode**: For detailed explanations, please [refer to the documentation](https://docs.dify.ai/guides/knowledge-base/create-knowledge-and-upload-documents#5-indexing-method). - -**Embedding Model**: Allows you to modify the embedding model for the knowledge base. Changing the embedding model will re-embed all documents in the knowledge base, and the original embeddings will be deleted. - -**Retrieval Settings**: For detailed explanations, please [refer to the documentation](https://docs.dify.ai/guides/knowledge-base/create-knowledge-and-upload-documents#6-retrieval-settings). - -*** - -#### View Linked Applications in the Knowledge Base - -On the left side of the knowledge base, you can see all linked apps. Hover over the circular icon to view the list of all linked apps. Click the jump button on the right to quickly browse them. - -![Viewing the Linked Apps](https://assets-docs.dify.ai/2024/12/28899b9b0eba8996f364fb74e5b94c7f.png) - -You can manage your knowledge base documents either through a web interface or via an API. - -#### Maintain Knowledge Documents - -You can administer all documents and their corresponding chunks directly in the knowledge base. For more details, refer to the following documentation: - - - Learn more about managing knowledge documents - - -#### Maintain Knowledge Base Via API - -Dify Knowledge Base provides a comprehensive set of standard APIs.
Developers can use these APIs to perform routine management and maintenance tasks, such as adding, deleting, updating, and retrieving documents and chunks. For more details, refer to the following documentation: - - - Learn more about maintaining knowledge base through API - - -![Knowledge base API management](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/knowledge-base/02cf8bc06990606ff1d60b73ce7a82c8.png) diff --git a/en/workshop/README.mdx b/en/workshop/README.mdx new file mode 100644 index 00000000..11dfa440 --- /dev/null +++ b/en/workshop/README.mdx @@ -0,0 +1,11 @@ +--- +title: Introduction +--- + + +Welcome to the Dify Workshop! These tutorials are designed for beginners who want to start learning Dify from scratch. Whether or not you have programming or AI-related background knowledge, we will guide you step by step to master the core concepts and usage of Dify without skipping any details. + +We will help you understand Dify through a series of experiments. Each experiment includes detailed steps and explanations to ensure you can easily follow and grasp the content. We interweave conceptual explanations with the experiments, allowing you to learn in practice and gradually build a comprehensive understanding of Dify. + +No need to worry about prerequisites! We will start from the most basic concepts and gradually guide you into more advanced topics. Whether you are a complete beginner or have some programming experience but want to learn AI technology, this tutorial will provide you with everything you need. + +Let's embark on this learning journey together and explore the endless possibilities of Dify! diff --git a/en/workshop/basic/build-ai-image-generation-app.mdx b/en/workshop/basic/build-ai-image-generation-app.mdx new file mode 100644 index 00000000..56ef7c4b --- /dev/null +++ b/en/workshop/basic/build-ai-image-generation-app.mdx @@ -0,0 +1,183 @@ +--- +title: How to Build an AI Image Generation App +--- + +> Author: Steven Lynn.
Dify Technical Writer. + +With the rise of image generation, many excellent image generation products have emerged, such as DALL-E, Flux, and Stable Diffusion. + +In this article, you will learn how to develop an AI image generation app using Dify. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/05ff829cf382e82c9ece2676032d2383.png) + +## You Will Learn + +* Methods for building an Agent using Dify +* Basic concepts of Agents +* Fundamentals of prompt engineering +* Tool usage +* The concept of large model hallucinations + +## 1. Setting the Stability API Key + +[Click here](https://platform.stability.ai/account/keys) to go to the Stability API key management page. + +If you haven't registered yet, you will be asked to register before entering the API management page. + +After entering the management page, click `copy` to copy the key. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/f73d82756bdf93c8863ac0b1f55fa5af.png) + +Next, you need to fill in the key in [Dify - Tools - Stability](https://cloud.dify.ai/tools) by following these steps: + +* Log in to Dify +* Enter Tools +* Select Stability +* Click `Authorize` + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/bcc961ffc8a341c8ba3137e475072f99.png) + +* Fill in the key and save + +## 2. Configure Model Providers + +To improve the interaction, we need an LLM to turn user instructions into concrete prompts for generating images. Next, we will configure model providers in Dify following these steps. + +The Free version of Dify provides 200 free OpenAI message credits.
+ +If the message credits are insufficient, you can add other model providers by following the steps in the image below: + +Click **Your Avatar - Settings - Model Provider** + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/4b4102f9027e2bda3fc520eaa8ea2354.png) + +If you haven't found a suitable model provider, the groq platform provides free usage credits for LLMs such as Llama. + +Log in to the [groq API management page](https://console.groq.com/keys) + +Click **Create API Key**, set a name you like, and copy the API Key. + +Go back to **Dify - Model Providers**, select **groqcloud**, and click **Setup**. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/0fda6e81dc23974576ddc21bda96e26d.png) + +Paste the API Key and save. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/b250952afad12b39613aa27da5335fa3.png) + +## 3. Build an Agent + +Go back to **Dify - Studio** and select **Create from Blank**. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/3b86940eadfe0ce14d175a9bb80fe5a9.png) + +In this experiment, we only need to understand the basics of Agents. + + +**What is an Agent** + +An Agent is an AI system that simulates human behavior and capabilities. It interacts with the environment through natural language processing, understands input information, and generates corresponding outputs. The Agent can also plan tasks and call tools to complete them. + +Select **Agent** and fill in a name. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/139ac0d2f4a10e2ec0e191457f4687a1.png) + +Next, you will enter the Agent orchestration interface as shown below. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/9045dbab8600e9c9d9632add787f26a6.png) + +Select the LLM.
Here we use Llama-3.1-70B provided by groq as an example: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/47edc14c1d3c68eeb4ee4807b35df185.png) + +Select Stability in **Tools**: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/6e1c3dd63925fd9ba60568deb2602044.png) + +### Write Prompts + +Prompts are the soul of the Agent and directly affect the quality of the output. Generally, the more specific the prompts, the better the output, but overly lengthy prompts can also have negative effects. + +The practice of crafting and tuning prompts is called prompt engineering. + +Don't worry if you haven't mastered prompt engineering yet; we will learn it step by step later. + +Let's start with the simplest prompt: + +``` +Draw the specified content according to the user's prompt using stability_text2image. +``` + +Each time the user sends a message, the Agent sees this system-level instruction, so it understands that a drawing request should be handled by calling the Stability tool. + +For example: Draw a girl holding an open book. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/05ff829cf382e82c9ece2676032d2383.png) + +### Don't want to write prompts? No problem! + +Click **Generate** in the upper right corner of Instructions. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/426a416e468b9f495eb13ac2986acdca.png) + +Enter your requirements in the **Instructions** and click **Generate**. The panel on the right will show the AI-generated prompts. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/d750983264182e7af5014d5df4477e31.png) + +However, to develop a good understanding of prompts, we should not rely on this feature in the early stages.
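To make the idea concrete, here is a toy Python sketch of how a system-level instruction routes a user's drawing request to a tool. This is not Dify internals: `stability_text2image` is a hypothetical stand-in function, and a plain function call stands in for the LLM's own tool-selection step.

```python
# Toy sketch only: in Dify, the LLM itself decides when to call the tool;
# here a direct function call stands in for that decision.

SYSTEM_PROMPT = (
    "Draw the specified content according to the user's prompt "
    "using stability_text2image."
)

def stability_text2image(prompt: str) -> str:
    # Hypothetical stand-in for the real Stability text-to-image tool.
    return f"<image generated for: {prompt}>"

def agent_reply(user_message: str) -> str:
    # The Agent sees SYSTEM_PROMPT on every turn, so a drawing request
    # is routed to the tool instead of being answered as plain text.
    return stability_text2image(user_message)

print(agent_reply("Draw a girl holding an open book"))
```

In the real Agent, the hard-coded routing above is replaced by the LLM reading the system prompt and choosing to invoke the tool on its own.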
+ +## Publish + +Click the publish button in the upper right corner, and after publishing, select **Run App** to get a web page running the Agent online. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/38a1bf752ca1fc71eccbbfd18046f5bc.png) + +Copy the URL of this web page to share it with friends. + +## Question 1: How to Specify the Style of Generated Images? + +We can add style instructions to the user's input, for example: Anime style, draw a girl holding an open book. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/d2d883d887272786ee19d97894cbb307.png) + +But if we want to set the default style to anime, we can add it to the system prompt because, as we learned earlier, the system prompt is visible on every user request and takes higher priority. + +``` +Draw the specified content according to the user's prompt using stability_text2image. The picture is in anime style. +``` + +## Question 2: How to Reject Certain Requests from Some Users? + +In many business scenarios, we need to avoid outputting unreasonable content, but LLMs will often follow user instructions without question, even when the output is wrong. This phenomenon of the model trying hard to answer users by fabricating false content is called **model hallucination**. Therefore, we need the model to refuse user requests when necessary. + +Users may also ask about things unrelated to the business, and we need the Agent to refuse such requests as well. + +We can use Markdown headings to organize the prompt, putting the instructions that teach the Agent to refuse unreasonable requests under a "Constraints" heading. This format is only a convention; you can use your own. + +``` +## Task +Draw the specified content according to the user's prompt using stability_text2image. The picture is in anime style.
+ +## Constraints +If the user requests content unrelated to drawing, reply: "Sorry, I don't understand what you're saying." +``` + +For example, let's ask: What's for dinner tonight? + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/06dcf569989d797919fbe49ab8d5cadc.png) + +In more formal business scenarios, we can use a sensitive-word list to refuse user requests. + +Add the keyword "dinner" in **Add Feature - Content Moderation**. When the user inputs the keyword, the Agent app outputs "Sorry, I don't understand what you're saying." + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/basic/828b27d1a873ff7b4b44f76d93229225.png) diff --git a/en/workshop/intermediate/article-reader.mdx b/en/workshop/intermediate/article-reader.mdx new file mode 100644 index 00000000..91774249 --- /dev/null +++ b/en/workshop/intermediate/article-reader.mdx @@ -0,0 +1,136 @@ +--- +title: Build an Article Reader Using File Upload +--- + + +> Author: Steven Lynn. Dify Technical Writer. + +In Dify, you can use the knowledge base to allow the agent to obtain accurate information from a large amount of text content. However, in many cases, the local files provided are not large enough to warrant a knowledge base. In such cases, you can use the file upload feature to provide local files directly as context for the LLM to read. + +In this experiment, we will build an article reader as a case study. This assistant will pose questions based on the uploaded document, helping users read papers and other materials with those questions in mind. + +## You Will Learn + +* File upload usage +* Basic usage of Chatflow +* Prompt writing skills +* Iteration node usage +* Doc extractor and list operator usage + +## **Prerequisites** + +Create a Chatflow in Dify. Make sure you have added a model provider and have sufficient quota.
+ + +## **Adding Nodes** + +In this experiment, at least four types of nodes are required: start node, document extractor node, LLM node, and answer node. + +### **Start Node** + +In the start node, you need to add a file variable. File upload has been supported since Dify v0.10.0, allowing you to add files as variables. + +For more information on file upload, please read: [File Upload](../../guides/workflow/file-upload.md) + +Add the file variable in the start node and check **document** in the supported file types. + + + +Some readers might notice `sys.files` in the system variables, which holds the files or file lists uploaded by users in the dialog box. + +The difference from creating your own file variable is that `sys.files` requires enabling file upload in Features and setting the allowed upload file types, and the variable is overwritten each time a new file is uploaded in the dialog. + + + +Please choose the appropriate file upload method according to your business scenario. + +### **Doc Extractor** + +**LLMs cannot read files directly.** This is a common misconception among first-time users of file upload, who may think that simply passing the file variable to an LLM node will work. In reality, the LLM reads nothing from a file variable. + +Thus, Dify introduced the **doc extractor**, which extracts text from the file variable and outputs it as a text variable. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/bc4b1492bd10ef782457ec6a709997f9.png) + +### **LLM** + +In this experiment, two LLM nodes need to be designed: structure extraction and question generation. + +#### **Structure Extraction** + +The structure extraction node extracts the structure of the original text, summarizing its key content.
+ +The prompts are as follows: + +``` +Read the following article content and perform the task +{{Result variable of the document extractor}} +# Task + +- **Main Objective**: Thoroughly analyze the structure of the article. +- **Secondary Objective**: Detail the content of each part of the article. +- **Requirements**: Analyze in as much detail as possible. +- **Restrictions**: No specific format restrictions, but the analysis must be organized and logical. +- **Expected Output**: A detailed analysis of the article structure, including the main content and role of each part. + +# Reasoning Order + +- **Reasoning Part**: By carefully reading the article, identify and analyze its structure. +- **Conclusion Part**: Provide specific content and role for each part. + +# Output Format + +- **Analysis Format**: Each part should be listed in a headline format, followed by a detailed explanation of that part's content. +- **Structure Form**: Markdown, to enhance readability. +- **Specific Description**: The content and role of each part, including but not limited to the introduction, body, conclusion, citations, etc. +``` + +#### **Question Generation** + +The question generation node derives questions about the article from the summary produced by the structure extraction node, helping the reader think them through while reading. + +The prompts are as follows: + +``` +Read the following article content and perform the task +{{Output of the structure extraction}} +# Task + +- **Main Objective**: Thoroughly read the above text, and propose as many questions as possible for each part of the article. +- **Requirements**: Questions should be meaningful and valuable, worthy of consideration. +- **Restrictions**: No specific restrictions. +- **Expected Output**: A series of questions for each part of the article; each question should have depth and thinking value.
+ +# Reasoning Order + +- **Reasoning Part**: Thoroughly read the article, analyze the content of each part, and consider the deep questions each part may raise. +- **Conclusion Part**: Pose meaningful and valuable questions, ensuring they provoke in-depth thought. + +# Output Format + +- **Format**: Each question should be listed separately, numbered. +- **Content**: Propose questions for each part of the article (such as introduction, background, methods, results, discussion, conclusion, etc.). +- **Quantity**: As many as possible, but each question should be meaningful and valuable. +``` + +## **Question 1: Handling Multiple Uploaded Files** + +To handle multiple uploaded files, an iteration node is needed. + +The iteration node is similar to a loop in many programming languages, except that it has no exit condition and the **input variable can only be of type `array` (list)**: Dify simply processes every item in the list until it is done. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/cc9cbf8b718b8abbf84cd8649a08c1a3.png) + +Therefore, you need to change the file variable in the start node to an `array` type, i.e., a file list. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/8eff802e3e1e3da466c5dc9ac56c50f2.png) + +## **Question 2: Handling Specific Files from a File List** + +In Question 1, some readers may have noticed that Dify processes all files before ending the loop, while in some cases only part of the files needs to be handled. For this, you can preprocess the file list using the **list operation** node. List operations work on all array-type variables, not just file lists. + +For example, you can limit the analysis to document-type files only and sort the files by file name.
+ +Before the iteration node, add a list operation, adjust the **filter condition** and **order by**, then change the input of the iteration node to the output of the list operation node. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/287690e1fef87af270c0d5020d25d6cf.png) diff --git a/en/workshop/intermediate/customer-service-bot.mdx b/en/workshop/intermediate/customer-service-bot.mdx new file mode 100644 index 00000000..103c65c8 --- /dev/null +++ b/en/workshop/intermediate/customer-service-bot.mdx @@ -0,0 +1,186 @@ +--- +title: Building a Smart Customer Service Bot Using a Knowledge Base +--- + + +> Author: Steven Lynn, Dify Technical Writer + +In the last experiment, we learned the basic usage of file uploads. However, when the text we need to read exceeds the LLM's context window, we need to use a knowledge base. + +> **What is context?** > +> The context window refers to the range of text that the LLM can "see" and "remember" when processing text. It determines how much previous text information the model can refer to when generating responses or continuing text. The larger the window, the more contextual information the model can utilize, and the generated content is usually more accurate and coherent. + +Previously, we learned about LLM hallucinations. A knowledge base allows the Agent to locate accurate information and thus answer questions accurately, with applications in fields such as customer service and search tools. + +Traditional customer service bots are often based on keyword retrieval. When a user's question falls outside the keywords, the bot cannot help. The knowledge base is designed to solve this problem, enabling semantic-level retrieval and reducing the burden on human agents. + +Before starting the experiment, remember that the core of the knowledge base is retrieval, not the LLM.
The LLM polishes how the answer is presented, but the retrieval is what supplies the answer. + +### What You Will Learn in This Experiment + +* Basic usage of Chatflow +* Usage of knowledge bases and external knowledge bases +* The concept of embeddings + +### Prerequisites + +#### Create an Application + +In Dify, select **Create from Blank - Chatflow.** + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/0147e0d6fa1412dcf38ff0b12d30e5fe.png) + +#### Add a Model Provider + +This experiment involves using embedding models. Currently, supported embedding model providers include OpenAI and Cohere; in Dify's model providers, those with the `TEXT EMBEDDING` label are supported. Ensure you have added at least one and have sufficient balance. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/ebfb40e8b80adb8d7e1096ee5da16fad.png) + +> **What is embedding?** > +> "Embedding" is a technique that converts discrete variables (such as words, sentences, or entire documents) into continuous vector representations. > +> Simply put, when we process natural language into data, we convert text into vectors. This process is called embedding. Vectors of semantically similar texts will be close together, while vectors of semantically opposite texts will be far apart. LLMs use this data for training, predicting subsequent vectors, and thus generating text. + +### Create a Knowledge Base + +Log in to Dify -> Knowledge -> Create Knowledge + + + +Dify supports three data sources: documents, Notion, and web pages. + +For local text files, note the file type and size limitations; syncing Notion content requires binding a Notion account; syncing a website requires using the **Jina** or **Firecrawl API**. + +We will start with uploading a local document as an example.
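The embedding idea in the note above can be made concrete with a toy example. Real embedding models output vectors with hundreds or thousands of dimensions; the 3-dimensional vectors below are invented purely for illustration.

```python
# Toy illustration of embeddings: semantically similar texts map to nearby
# vectors. These 3-D vectors are made up; real models output far more dimensions.
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

embeddings = {
    "How do I reset my password?":   [0.82, 0.10, 0.05],
    "I forgot my login credentials": [0.78, 0.15, 0.08],
    "What's for dinner tonight?":    [0.05, 0.90, 0.40],
}

query = embeddings["How do I reset my password?"]
for text, vec in embeddings.items():
    print(f"{cosine_similarity(query, vec):.3f}  {text}")
```

The two password-related sentences score close to 1.0 against each other, while the off-topic sentence scores much lower; vector retrieval ranks chunks by exactly this kind of similarity.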
+ +#### Chunk Settings + +After uploading the document, you will enter the following page: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/0dab2d0a607d9486ae973d897b0c08bd.png) + +You can see a segmentation preview on the right. The default selection is automatic segmentation and cleaning: Dify automatically divides the article into paragraphs based on the content. You can also set other segmentation rules in the custom settings. + +#### Index Method + +Normally we prefer to select **High Quality**, but this consumes extra tokens. Selecting **Economic** will not consume any tokens. + +The community edition of Dify offers a Q\&A segmentation mode: selecting the corresponding language organizes the text into a Q\&A format, which consumes additional tokens. + +#### Embedding Model + +Please refer to the model provider's documentation and pricing information before use. + +Different embedding models are suitable for different scenarios. For example, Cohere's `embed-english` is suitable for English documents, and `embed-multilingual` is suitable for multilingual documents. + +#### Retrieval Settings + +Dify provides three retrieval functions: vector retrieval, full-text retrieval, and hybrid retrieval. Hybrid retrieval is the most commonly used. + +In hybrid retrieval, you can set weights or use a reranking model. When setting weights, you can choose whether retrieval should lean more toward semantics or keywords. For example, in the image below, semantics account for 70% of the weight, and keywords account for 30%. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/6a1f2b4a6e1b63febdaee3e01c1d39a4.png) + +Clicking **Save and Process** will process the document. After processing, the document can be used in the application.
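The weighted hybrid retrieval described above can be sketched in a few lines of Python. The chunks and their per-channel scores below are invented for the example; only the 70/30 combination mirrors the weight setting shown in the screenshot.

```python
# Sketch of weighted hybrid retrieval: combine a semantic (vector) score and
# a keyword (full-text) score per chunk, then rank by the blended score.
SEMANTIC_WEIGHT = 0.7  # the 70% semantics weight from the example
KEYWORD_WEIGHT = 0.3   # the 30% keywords weight

chunks = [
    {"text": "Reset your password from the account settings page.",
     "semantic": 0.91, "keyword": 0.40},
    {"text": "Dify supports Markdown output in direct reply nodes.",
     "semantic": 0.20, "keyword": 0.75},
]

for chunk in chunks:
    chunk["score"] = (SEMANTIC_WEIGHT * chunk["semantic"]
                      + KEYWORD_WEIGHT * chunk["keyword"])

ranked = sorted(chunks, key=lambda c: c["score"], reverse=True)
for chunk in ranked:
    print(f"{chunk['score']:.2f}  {chunk['text']}")
```

Shifting the weights toward keywords would favor the second chunk; this is the trade-off the weight slider controls.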
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/fdc20eb804ec39a308726324f6b33f45.png) + +#### Syncing from a Website + +In many cases, we need to build a smart customer service bot based on help documentation. Taking Dify as an example, we can convert the [Dify help documentation](https://docs.dify.ai) into a knowledge base. + +Currently, Dify supports processing up to 50 pages per knowledge base; if you exceed the limit, create a new knowledge base. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/8936a4f7952c7aefe5f9d58ee730883c.png) + +#### Adjusting Knowledge Base Content + +After the knowledge base has processed all documents, it is best to check whether the segmentation is coherent; incoherent chunks hurt retrieval quality and need to be adjusted manually. + +Click on the document content to browse the segmented content. If there is irrelevant content, you can disable or delete it. + + +If content that belongs together has been split into another chunk, move it back as well. + +#### Recall Test + +On the document page of the knowledge base, click **Recall Test** in the left sidebar and enter keywords to test the accuracy of the retrieval results. + +### Add Nodes + +Open the app you created, and let's start building the smart customer service bot. + +#### Question Classification Node + +You need to use a question classification node to separate different user needs. In some cases, users may even chat about irrelevant topics, so you need to set a class for this as well. + +To make the classification more accurate, you need to choose a better LLM, and the classes need to be specific and clearly distinguished from one another.
+ +Here is a reference classification: + +* User asks irrelevant questions +* User asks Dify-related questions +* User requests explanation of technical terms +* User asks about joining the community + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/627566df2b28b58ab84e53d3737c6927.png) + +#### Direct Reply Node + +In the question classification, "User asks irrelevant questions" and "User asks about joining the community" do not need LLM processing to reply. Therefore, you can connect a direct reply node after these two classes. + +"User asks irrelevant questions": + +You can guide the user to the help documentation, letting them try to solve the problem themselves, for example: + +``` +I'm sorry, I can't answer your question. If you need more help, please check the [help documentation](https://docs.dify.ai). +``` + +Dify supports Markdown formatted text output. You can use Markdown to enrich the text format in the output, and even insert images in the text using Markdown. + +#### Knowledge Retrieval Node + +Add a knowledge retrieval node after "User asks Dify-related questions" and check the knowledge base to be used. + + + +#### LLM Node + +After the knowledge retrieval node, add an LLM node to organize the content retrieved from the knowledge base. + +The LLM tailors the reply to the user's question so that the reply reads more naturally. + +Context: You need to use the output of the knowledge retrieval node as the context of the LLM node. + +System prompt: Based on `{{context}}`, answer `{{user question}}`. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/0052ebf236d55dc0c143c5dbfe5f1e76.png) + +You can use `/` or `{` to reference variables in the prompt writing area. Variables starting with `sys.` are system variables. Please refer to the help documentation for details.
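Under the hood, what the LLM node does with its prompt amounts to string interpolation: the retrieved chunks fill `{{context}}` and the user's message fills the question slot. The function and variable names below are illustrative, not Dify's API.

```python
# Illustrative sketch (not Dify's API): how retrieved chunks and the user's
# question are interpolated into the system prompt before the LLM is called.
def build_system_prompt(context_chunks, user_question):
    context = "\n\n".join(context_chunks)  # the knowledge retrieval output
    return (
        "Based on the following context, answer the user's question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_question}"
    )

chunks = ["Dify supports three data sources: documents, Notion, and web pages."]
print(build_system_prompt(chunks, "Which data sources can I sync?"))
```

The better the retrieval step fills `{{context}}`, the less room the LLM has to hallucinate an answer.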
+ +In addition, you can enable LLM memory to make the user's conversation experience more coherent. + +### Question 1: How to Connect External Knowledge Bases + +In the knowledge base feature, you can connect external knowledge bases through the external knowledge base API, such as an AWS Bedrock knowledge base. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/7bcfb95e806966a868885814f0d7dc35.png) + +For best practices on AWS Bedrock knowledge bases, please read: [how-to-connect-aws-bedrock.md](../../learn-more/use-cases/how-to-connect-aws-bedrock.md "mention") + +### Question 2: How to Manage Knowledge Bases Through APIs + +In both the community edition and SaaS version of Dify, you can add, delete, and query the status of knowledge bases through the knowledge base API. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/cda4458ccb9be9e1c3ec9821fb5c5f99.png) + +On the instance where the knowledge base is deployed, go to **Knowledge Base -> API** and create an API key. Please keep the API key safe. + +### Question 3: How to Embed the Customer Service Bot into a Webpage + +After publishing the application, select the embed-in-website option, choose a suitable embedding method, and paste the code into the appropriate place on your webpage. + + diff --git a/en/workshop/intermediate/twitter-chatflow.mdx b/en/workshop/intermediate/twitter-chatflow.mdx new file mode 100644 index 00000000..51cbf36b --- /dev/null +++ b/en/workshop/intermediate/twitter-chatflow.mdx @@ -0,0 +1,165 @@ +--- +title: Generating an Analysis of a Twitter Account Using a Chatflow Agent +cover: ../../.gitbook/assets/%E7%94%BB%E6%9D%BF_1.png +coverY: 0 +--- + + +## Introduction + +In Dify, you can use crawler tools such as Jina, which convert web pages into Markdown that LLMs can read.
+ +Recently, [wordware.ai](https://www.wordware.ai/) showed us that we can use crawlers to scrape social media for LLM analysis, creating more interesting applications. + +However, X (formerly Twitter) stopped providing free API access on February 2, 2023, and has since tightened its anti-crawling measures, so tools like Jina cannot access X's content directly. + +> Starting February 9, we will no longer support free access to the Twitter API, both v2 and v1.1. A paid basic tier will be available instead 🧵 > +> — Developers (@XDevelopers) [February 2, 2023](https://twitter.com/XDevelopers/status/1621026986784337922?ref\_src=twsrc%5Etfw) + +Fortunately, Dify also has an HTTP tool, which allows us to call external crawling services by sending HTTP requests. Let's get started! + +## **Prerequisites** + +### Register Crawlbase + +Crawlbase is an all-in-one data crawling and scraping platform designed for businesses and developers. + +The Crawlbase Scraper allows you to scrape data from social platforms such as X, Facebook, and Instagram. + +Click to register: [crawlbase.com](https://crawlbase.com) + +### Deploy Dify locally + +Dify is an open-source LLM app development platform. You can use the cloud service or deploy it locally with Docker Compose. + +If you don't want to deploy it locally, register a free Dify Cloud sandbox account here: [https://cloud.dify.ai/signin](https://cloud.dify.ai/signin). + + +Dify Cloud Sandbox users get 200 free credits, equivalent to 200 GPT-3.5 messages or 20 GPT-4 messages.
+ + +The following is a brief tutorial on deploying Dify: + +#### Clone Dify + +```bash +git clone https://github.com/langgenius/dify.git +``` + +#### **Start Dify** + +```bash +cd dify/docker +cp .env.example .env +docker compose up -d +``` + +### Configure LLM Providers + +Configure a model provider in the account settings: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/4b4102f9027e2bda3fc520eaa8ea2354.png) + +## Create a chatflow + +Now, let's get started on the chatflow. + +Click on `Create from Blank` to start: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/b2955735f5c122d8a2fc08ef13654239.png) + +The initialized chatflow should look like this: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/baee341b771d1cd77780fd4845b467b2.png) + +## Add nodes to chatflow + +![The final chatflow looks like this](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/bad3185d9f2c92994c24de65a5414182.png) + +### Start node + +In the start node, we can add input variables that are collected at the beginning of a chat. In this article, we need a Twitter user's ID as a string variable. Let's name it `id`. + +Click on the Start node and add a new variable: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/a041be2230364d7e729527f3f7af34d8.png) + +### Code node + +According to the [Crawlbase docs](https://crawlbase.com/docs/crawling-api/scrapers/#twitter-profile), the variable `url` (which will be used in the following node) should be `https://twitter.com/` + `user id`, such as `https://twitter.com/elonmusk` for Elon Musk.
+ +To convert the user ID into a complete URL, we can use the following Python code to join the prefix `https://twitter.com/` with the user ID: + +```python +def main(id: str) -> dict: + return { + "url": "https://twitter.com/" + id, + } +``` + +Add a code node, select Python, and set the input and output variable names: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/e5523ba1f801f4009b74e7cf03e2ef2f.png) + +### HTTP request node + +Based on the [Crawlbase docs](https://crawlbase.com/docs/crawling-api/scrapers/#twitter-profile), to scrape a Twitter user's profile, we need to fill in the HTTP request node as follows: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/13899d88abeb3b3be20c44d40565a5f9.png) + +For security reasons, it is best not to enter the token value as plain text. In recent versions of Dify, we can store token values in **Environment Variables**: click `env` - `Add Variable` to set the token value, so plain text will not appear in the node. + +Check [https://crawlbase.com/dashboard/account/docs](https://crawlbase.com/dashboard/account/docs) for your Crawlbase API Key. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/c99b66ac8d30289615a8869bae5a6455.png) + +By typing `/`, you can easily insert the API Key as a variable. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/51f9350677acb396bad5841fa80c903c.png) + +Tap the start button of this node to check whether it works correctly: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/094b96e513169a47f1749e46e1357893.png) + +### LLM node + +Now we can use the LLM to analyze the result scraped by Crawlbase and carry out our instructions. + +The `context` value should be the `body` output of the HTTP request node. + +The following is a sample system prompt.
+ +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/46f4e15ac1e9d3ca3f47dc5bb921ff01.png) + +## Test run + +Click `Preview` to start a test run and enter a Twitter user ID in `id`: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/a25b122dfa14f0c65fcd3498ccf1898e.png) + +For example, I want to analyze Elon Musk's tweets and write a tweet about global warming in his tone. + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/workshop/intermediate/835a01082e74723138d9f97bee0c6c4b.png) + +Does this sound like Elon? lol + +Click `Publish` in the upper right corner and add it to your website. + +Have fun! + +## Lastly… + +### Other X (Twitter) Crawlers + +In this article, I've introduced Crawlbase. It is probably the cheapest Twitter crawler service available, but it sometimes fails to scrape the content of user tweets correctly. + +The Twitter crawler service used by [wordware.ai](http://wordware.ai) mentioned earlier is **Tweet Scraper V2**, but the subscription for the hosted platform **Apify** is $49 per month.
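Whichever crawler you pick, it helps to sanity-check the request outside Dify before wiring it into a chatflow. Below is a minimal sketch of the Crawlbase call made by the HTTP request node earlier, assuming the `token`/`url`/`scraper` query parameters described in the Crawlbase Crawling API docs; the token is a placeholder, and the actual network call is left commented out.

```python
# Minimal sanity check of the Crawlbase request, run outside Dify.
# Assumes the token/url/scraper query parameters from the Crawlbase docs;
# YOUR_CRAWLBASE_TOKEN is a placeholder.
from urllib.parse import urlencode

def build_crawlbase_url(token: str, target_url: str) -> str:
    # urlencode percent-encodes the target URL so it can ride in a query string.
    query = urlencode({
        "token": token,
        "url": target_url,
        "scraper": "twitter-profile",
    })
    return f"https://api.crawlbase.com/?{query}"

url = build_crawlbase_url("YOUR_CRAWLBASE_TOKEN", "https://twitter.com/elonmusk")
print(url)
# import requests; print(requests.get(url).json())  # uncomment with a real token
```

If the request returns profile data here, the same URL pieces (base URL, query parameters, token from an environment variable) should work in the HTTP request node.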
+ +## Links + +* [X@dify\_ai](https://x.com/dify\_ai) +* Dify’s repo on GitHub:[https://github.com/langgenius/dify](https://github.com/langgenius/dify) diff --git a/ja-jp/conversion.log b/ja-jp/conversion.log new file mode 100644 index 00000000..890eb6a7 --- /dev/null +++ b/ja-jp/conversion.log @@ -0,0 +1,114 @@ +2025-03-21 16:56:19,260 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/agent.md +2025-03-21 16:56:19,266 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/agent.mdx +2025-03-21 16:56:19,268 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/agent.md +2025-03-21 16:56:19,268 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/chatbot-application.md +2025-03-21 16:56:19,271 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/chatbot-application.mdx +2025-03-21 16:56:19,272 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/chatbot-application.md +2025-03-21 16:56:19,272 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/creating-an-application.md +2025-03-21 16:56:19,273 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/creating-an-application.mdx +2025-03-21 16:56:19,273 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/creating-an-application.md +2025-03-21 16:56:19,273 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/README.md +2025-03-21 16:56:19,274 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/README.mdx +2025-03-21 16:56:19,274 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/README.md +2025-03-21 16:56:19,274 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/multiple-llms-debugging.md +2025-03-21 16:56:19,275 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/multiple-llms-debugging.mdx +2025-03-21 16:56:19,275 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/multiple-llms-debugging.md +2025-03-21 16:56:19,276 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/app-toolkits/README.md +2025-03-21 16:56:19,277 - 
md-to-mdx - INFO - 转换完成: guides/application-orchestrate/app-toolkits/README.mdx +2025-03-21 16:56:19,277 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/app-toolkits/README.md +2025-03-21 16:56:19,277 - md-to-mdx - INFO - 处理文件: guides/application-orchestrate/app-toolkits/moderation-tool.md +2025-03-21 16:56:19,278 - md-to-mdx - INFO - 转换完成: guides/application-orchestrate/app-toolkits/moderation-tool.mdx +2025-03-21 16:56:19,279 - md-to-mdx - INFO - 已删除源文件: guides/application-orchestrate/app-toolkits/moderation-tool.md +2025-03-21 17:27:24,239 - md-to-mdx - INFO - 处理文件: guides/workflow/error-handling/error-type.md +2025-03-21 17:27:24,246 - md-to-mdx - INFO - 转换完成: guides/workflow/error-handling/error-type.mdx +2025-03-21 17:27:24,247 - md-to-mdx - INFO - 已删除源文件: guides/workflow/error-handling/error-type.md +2025-03-21 17:27:24,247 - md-to-mdx - INFO - 处理文件: guides/workflow/error-handling/saretaerrojikku.md +2025-03-21 17:27:24,248 - md-to-mdx - INFO - 转换完成: guides/workflow/error-handling/saretaerrojikku.mdx +2025-03-21 17:27:24,249 - md-to-mdx - INFO - 已删除源文件: guides/workflow/error-handling/saretaerrojikku.md +2025-03-21 17:27:24,249 - md-to-mdx - INFO - 处理文件: guides/workflow/error-handling/README.md +2025-03-21 17:27:24,252 - md-to-mdx - INFO - 转换完成: guides/workflow/error-handling/README.mdx +2025-03-21 17:27:24,253 - md-to-mdx - INFO - 已删除源文件: guides/workflow/error-handling/README.md +2025-03-21 17:27:24,253 - md-to-mdx - INFO - 处理文件: guides/workflow/error-handling/predefined-nodes-failure-logic.md +2025-03-21 17:27:24,254 - md-to-mdx - INFO - 转换完成: guides/workflow/error-handling/predefined-nodes-failure-logic.mdx +2025-03-21 17:27:24,255 - md-to-mdx - INFO - 已删除源文件: guides/workflow/error-handling/predefined-nodes-failure-logic.md +2025-03-21 17:43:26,792 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/history.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/history.mdx +2025-03-21 17:43:26,794 - 
md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/history.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/README.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/README.mdx +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/README.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/log.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/log.mdx +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/log.md +2025-03-21 17:43:26,794 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/checklist.md +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/checklist.mdx +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/checklist.md +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 处理文件: guides/workflow/debug-and-preview/step-run.md +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 转换完成: guides/workflow/debug-and-preview/step-run.mdx +2025-03-21 17:43:26,795 - md-to-mdx - INFO - 已删除源文件: guides/workflow/debug-and-preview/step-run.md +2025-03-21 17:51:50,366 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/external-data-tool.md +2025-03-21 17:51:50,372 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/external-data-tool.mdx +2025-03-21 17:51:50,373 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/external-data-tool.md +2025-03-21 17:51:50,373 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/integrate-knowledge-within-application.md +2025-03-21 
17:51:50,380 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/integrate-knowledge-within-application.mdx +2025-03-21 17:51:50,381 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/integrate-knowledge-within-application.md +2025-03-21 17:51:50,381 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/README.md +2025-03-21 17:51:50,382 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/README.mdx +2025-03-21 17:51:50,382 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/README.md +2025-03-21 17:51:50,382 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/metadata.md +2025-03-21 17:51:50,396 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/metadata.mdx +2025-03-21 17:51:50,396 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/metadata.md +2025-03-21 17:51:50,396 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/connect-external-knowledge-base.md +2025-03-21 17:51:50,398 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/connect-external-knowledge-base.mdx +2025-03-21 17:51:50,399 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/connect-external-knowledge-base.md +2025-03-21 17:51:50,399 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/knowledge-request-rate-limit.md +2025-03-21 17:51:50,399 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/knowledge-request-rate-limit.mdx +2025-03-21 17:51:50,399 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/knowledge-request-rate-limit.md +2025-03-21 17:51:50,399 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/external-knowledge-api-documentation.md +2025-03-21 17:51:50,400 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/external-knowledge-api-documentation.mdx +2025-03-21 17:51:50,400 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/external-knowledge-api-documentation.md +2025-03-21 17:51:50,400 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/retrieval-test-and-citation.md +2025-03-21 17:51:50,401 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/retrieval-test-and-citation.mdx +2025-03-21 17:51:50,401 - md-to-mdx - INFO - 已删除源文件: 
guides/knowledge-base/retrieval-test-and-citation.md +2025-03-21 17:51:50,401 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md +2025-03-21 17:51:50,404 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx +2025-03-21 17:51:50,404 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md +2025-03-21 17:51:50,404 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/knowledge-and-documents-maintenance/README.md +2025-03-21 17:51:50,413 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/knowledge-and-documents-maintenance/README.mdx +2025-03-21 17:51:50,413 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/knowledge-and-documents-maintenance/README.md +2025-03-21 17:51:50,413 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md +2025-03-21 17:51:50,414 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx +2025-03-21 17:51:50,414 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md +2025-03-21 17:51:50,414 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.md +2025-03-21 17:51:50,415 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx +2025-03-21 17:51:50,416 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.md +2025-03-21 17:51:50,416 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.md +2025-03-21 17:51:50,416 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx +2025-03-21 17:51:50,416 - 
md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.md +2025-03-21 17:51:50,416 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/README.md +2025-03-21 17:51:50,418 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/README.mdx +2025-03-21 17:51:50,418 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/README.md +2025-03-21 17:51:50,418 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/selecting-retrieval-settings.md +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/selecting-retrieval-settings.mdx +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/selecting-retrieval-settings.md +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.md +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.md +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/README.md +2025-03-21 17:51:50,419 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/README.mdx +2025-03-21 17:51:50,420 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/README.md +2025-03-21 17:51:50,420 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.md +2025-03-21 17:51:50,420 - md-to-mdx 
- INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx +2025-03-21 17:51:50,420 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.md +2025-03-21 17:51:50,420 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-website.md +2025-03-21 17:51:50,420 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-website.mdx +2025-03-21 17:51:50,421 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-website.md +2025-03-21 17:51:50,421 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/README.md +2025-03-21 17:51:50,421 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/README.mdx +2025-03-21 17:51:50,421 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/README.md +2025-03-21 17:51:50,421 - md-to-mdx - INFO - 处理文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-notion.md +2025-03-21 17:51:50,422 - md-to-mdx - INFO - 转换完成: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-notion.mdx +2025-03-21 17:51:50,422 - md-to-mdx - INFO - 已删除源文件: guides/knowledge-base/create-knowledge-and-upload-documents/import-online-datasource/sync-from-notion.md diff --git a/ja-jp/getting-started/cloud.mdx b/ja-jp/getting-started/cloud.mdx new file mode 100644 index 00000000..75949c38 --- /dev/null +++ b/ja-jp/getting-started/cloud.mdx @@ -0,0 +1,25 @@ +--- +title: クラウドサービス +--- + + + +**ヒント:** Difyは現在ベータテストフェーズにあります。ドキュメントと製品に不一致がある場合は、製品の実際の体験を優先してください。 + + 
+Difyはすべてのユーザーに[クラウドサービス](http://cloud.dify.ai)を提供しており、自分でデプロイすることなくDifyの完全な機能を利用できます。Difyのクラウドサービスを利用するには、GitHubまたはGoogleアカウントが必要です。 + +1. [Difyクラウドサービス](https://cloud.dify.ai)にログインし、新しいワークスペースを作成するか、既存のワークスペースに参加します。 +2. モデルプロバイダーを設定するか、提供されているホスト型モデルプロバイダーを使用します。 +3. [アプリケーションを作成](../guides/application-orchestrate/creating-an-application.md)しましょう! + +### サブスクリプションプラン + +クラウドサービスには複数のサブスクリプションプランが用意されており、チームの状況に応じて選択できます。 + +* サンドボックス(無料版) +* プロフェッショナル版 +* チーム版 +* エンタープライズ版 + +各バージョンの価格設定については、[ここ](https://dify.ai/pricing)をご参照ください。 \ No newline at end of file diff --git a/ja-jp/getting-started/dify-premium.mdx b/ja-jp/getting-started/dify-premium.mdx new file mode 100644 index 00000000..af2a6cb6 --- /dev/null +++ b/ja-jp/getting-started/dify-premium.mdx @@ -0,0 +1,112 @@ +--- +title: Dify Premium +description: Dify Premium +--- + + +Dify Premiumは[AWS AMI](https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html)製品です。ブランドのカスタマイズが可能で、AWS EC2にワンクリックで展開できます。[AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6)から購読でき、次のようなシナリオに最適です: + +* 中小企業が1つ以上のアプリケーションをサーバーに構築し、データのプライバシーに関心がある場合。 +* [Dify Cloud](https://docs.dify.ai/v/ja-jp/getting-started/cloud)のサブスクリプションプランに関心があるものの、活用事例が[プラン](https://dify.ai/pricing)で提供されるリソースを超える場合。 +* Dify Enterpriseを組織内で導入する前に、POC検証を行いたい場合。 + +## セットアップ + +Difyを初めて使用する際には、管理者初期化パスワード(EC2インスタンスIDとして設定)を入力し、セットアッププロセスを開始してください。 + +AMIを展開した後は、EC2コンソールで見つかるインスタンスのパブリックIPを使用してDifyにアクセスします(デフォルトではHTTPポート80を使用します)。 + +## アップグレード + +EC2インスタンスで、次のコマンドを実行してください: + +```bash +git clone https://github.com/langgenius/dify.git /tmp/dify +mv -f /tmp/dify/docker/* /dify/ +rm -rf /tmp/dify +docker-compose down +docker-compose pull +docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d +``` + + + +アップグレードは以下の手順で行います: + +1. データのバックアップ +2. プラグインの移行 +3. メインプロジェクトのアップグレード + +### 1. 
データのバックアップ + +1.1 `cd` コマンドで Dify プロジェクトのパスに移動し、バックアップ用のブランチを作成します。 + +1.2 次のコマンドを実行して、docker-compose YAML ファイルをバックアップします(オプション)。 + +```bash +cd docker +cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak +``` + +1.3 サービスを停止するために以下のコマンドを実行し、Docker ディレクトリでデータバックアップを作成します。 + +```bash +docker compose down +tar -cvf volumes-$(date +%s).tgz volumes +``` + +### 2. バージョンアップ + +`v1.0.0` は Docker Compose を使用してデプロイできます。`cd` コマンドで Dify プロジェクトのパスに移動し、以下のコマンドで Dify のバージョンをアップグレードします: + +```bash +git checkout 1.0.0 # バージョン 1.0.0 に切り替える +cd docker +docker compose -f docker-compose.yaml up -d +``` + +### 3. ツールとモデルプロバイダのプラグインへの移行 + +このステップでは、以前のコミュニティ版で使用していたツールやモデルプロバイダを自動的にデータ移行し、新しいバージョンのプラグイン環境にインストールします。 + +1. `docker ps` コマンドを実行して、docker-api コンテナの ID を確認します。 + +例: + +```bash +docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +417241cd**** nginx:latest "sh -c 'cp /docker-e…" 3 hours ago Up 3 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp docker-nginx-1 +f84aa773**** langgenius/dify-api:1.0.0 "/bin/bash /entrypoi…" 3 hours ago Up 3 hours 5001/tcp docker-worker-1 +a3cb19c2**** langgenius/dify-api:1.0.0 "/bin/bash /entrypoi…" 3 hours ago Up 3 hours 5001/tcp docker-api-1 +``` + +`docker exec -it a3cb19c2**** bash` コマンドを実行してコンテナのターミナルにアクセスし、以下を実行します: + +```bash +poetry run flask extract-plugins --workers=20 +``` + +> エラーが発生した場合は、サーバーに `poetry` 環境をインストールしてから実行してください。コマンド実行後、端末に入力待機のプロンプトが表示された場合は「Enter」を押して入力をスキップします。 + +このコマンドは、現在の環境で使用しているすべてのモデルとツールを抽出します。`workers` パラメータは並行プロセス数を決定し、必要に応じて調整できます。コマンドが終了すると、結果が保存される `plugins.jsonl` ファイルが生成されます。このファイルには、現在の Dify インスタンス内のすべてのワークスペースのプラグイン情報が含まれます。 + +インターネット接続が正常で、`https://marketplace.dify.ai` にアクセスできることを確認してください。`docker-api-1` コンテナ内で以下のコマンドを実行します: + +```bash +poetry run flask install-plugins --workers=2 +``` + +このコマンドは、最新のコミュニティ版に必要なすべてのプラグインをダウンロードしてインストールします。ターミナルに `Install plugins completed.` と表示されたら、移行は完了です。 + + + + +## カスタマイズ + 
+セルフホスト展開の場合と同様に、EC2インスタンス内の.envファイルの環境変数を必要に応じて変更することができます。その後、以下のコマンドを使用してDifyを再起動してください: + +```bash +docker-compose down +docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d +``` diff --git a/ja-jp/getting-started/install-self-hosted/README.mdx b/ja-jp/getting-started/install-self-hosted/README.mdx new file mode 100644 index 00000000..2feeb81e --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/README.mdx @@ -0,0 +1,17 @@ +--- +title: コミュニティ版 +--- + + +Dify コミュニティ版はオープンソース版で、以下の2つの方法のいずれかでデプロイできます: + +* [Docker Compose デプロイ](https://docs.dify.ai/v/ja-jp/getting-started/install-self-hosted/docker-compose) +* [ローカルソースコードで起動](https://docs.dify.ai/v/ja-jp/getting-started/install-self-hosted/local-source-code) + +GitHub で [Dify コミュニティ版](https://github.com/langgenius/dify) をご覧ください。 + +### コードの寄稿 + +正確な審査を確保するため、直接変更をコミットする権限を持つ寄稿者を含むすべてのコードの寄稿は、PR(プルリクエスト)を提出し、マージされる前にコア開発者の承認を得る必要があります。 + +私たちはすべての人のPRを歓迎します!ご協力いただける場合は、[寄稿ガイド](https://github.com/langgenius/dify/blob/main/CONTRIBUTING_JA.md) でプロジェクトに貢献する方法についての詳細を確認できます。 diff --git a/ja-jp/getting-started/install-self-hosted/bt-panel.mdx b/ja-jp/getting-started/install-self-hosted/bt-panel.mdx new file mode 100644 index 00000000..90c2b088 --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/bt-panel.mdx @@ -0,0 +1,73 @@ +--- +title: aaPanelでのデプロイ方法 +--- + + +## 前提条件 + +Difyをインストールする前に、以下の最低システム要件を満たしていることを確認してください: + +* CPU ≥ 2 Core +* RAM ≥ 4 GiB + +| オペレーティングシステム | ソフトウェア | 説明 | +| --- | --- | --- | +| Linuxプラットフォーム | aaPanel 7.0.11以降 | aaPanelのインストール方法については、aaPanelインストールガイドを参照してください。 |
+## デプロイ手順 + +1. aaPanelにログインし、メニューバーで`Docker`をクリックします。 + +2. 初めて利用する場合は、`Docker`と`Docker Compose`サービスのインストールを促されるので、`install`をクリックしてください。既にインストールされている場合は、このステップをスキップしてください。 + +3. インストールが完了したら、`One-Click Install`から`Dify`を見つけて、`install`をクリックします。 + +4. ドメイン名やポートなどの基本情報を設定し、インストールを完了させます。 +> \[!重要] +> +> ドメイン名はオプションです。ドメイン名を入力した場合は、[Website] --> [Proxy Project]から管理できます。ドメイン名を設定した後は、[Allow external access]のチェックを入れる必要はありません。それ以外の場合は、ポートを介してアクセスする前にチェックを入れる必要があります。 + +5. インストールが完了したら、前のステップで設定したドメイン名またはIPアドレスとポートをブラウザで入力してアクセスします。 +- 名前(Name):アプリケーション名、デフォルトは`Dify-characters` +- バージョン選択(Version selection):デフォルトは`latest` +- ドメイン名(Domain name):ドメイン名でアクセスする必要がある場合は、ここでドメイン名を設定し、ドメイン名をサーバーに解決してください。 +- 外部アクセスを許可(Allow external access):`IP+ポート`を介して直接アクセスする必要がある場合は、チェックを入れてください。ドメイン名を設定している場合は、ここでチェックを入れないでください。 +- ポート(Port):デフォルトは`8088`であり、必要に応じて変更できます。 + +6. 提出後、パネルが自動的にアプリケーションを初期化し、約1〜3分かかります。初期化が完了したら、アクセスできるようになります。 + +### Difyにアクセスする + +管理者初期化ページにアクセスして、管理者アカウントを設定してください: + +```bash +# ドメインを設定した場合 +http://yourdomain/install + +# `IP+ポート`を介してアクセスする場合 +http://your_server_ip:8088/install +``` + +Difyウェブインターフェースのアドレス: + +```bash +# ドメインを設定した場合 +http://yourdomain/ + +# `IP+ポート`を介してアクセスする場合 +http://your_server_ip:8088/ +``` \ No newline at end of file diff --git a/ja-jp/getting-started/install-self-hosted/docker-compose.mdx b/ja-jp/getting-started/install-self-hosted/docker-compose.mdx new file mode 100644 index 00000000..6f3c1bf6 --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/docker-compose.mdx @@ -0,0 +1,172 @@ +--- +title: Docker Compose デプロイ +--- + + +### 前提条件 + +> Dify インストール前に, マシンが最小インストール要件を満たしていることを確認してください: +> +> * CPU >= 2 Core +> * RAM >= 4 GiB + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| オペレーティング・システム | ソフトウェア | 説明 | +| --- | --- | --- | +| macOS 10.14またはそれ以降 | Docker Desktop | Docker仮想マシン (VM) を少なくとも2つの仮想CPU (vCPU) と8 GBの初期メモリを使用するように設定してください。そうしないと、インストールが失敗する可能性があります。詳細についてはMacにDocker Desktopをインストールを参照してください。 | +| Linuxプラットフォーム | Docker 19.03以降、Docker Compose 1.28以降 | 詳細についてはDockerのインストールおよびDocker Composeのインストールを参照してください。 | +| WSL 2を有効にしたWindows | Docker Desktop | ソースコードやその他のデータをLinuxコンテナにバインドする際には、それらをWindowsファイルシステムではなくLinuxファイルシステムに保存することをお勧めします。詳細についてはWSL 2バックエンドを使用してWindowsにDocker Desktopをインストールを参照してください。 |
+### Difyのクローン + +Difyのソースコードをローカルにクローンします: + +```bash +# 現在の最新バージョンは0.15.3だと仮定すると +git clone https://github.com/langgenius/dify.git --branch 0.15.3 +``` + +### Difyの起動 + +1. Difyソースコードのdockerディレクトリに移動します: + + ```bash + cd dify/docker + ``` +2. 環境設定ファイルをコピーします + + ```bash + cp .env.example .env + ``` +3. Docker コンテナを起動します + + システムにインストールされているDocker Composeのバージョンに応じて、適切なコマンドでコンテナを起動します。`$ docker compose version`でバージョンを確認できます。詳しくは[Docker ドキュメント](https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command)を参照してください: + + * Docker Compose V2を使用する場合、以下のコマンドを実行します: + + ```bash + docker compose up -d + ``` + + * Docker Compose V1を使用する場合、以下のコマンドを実行します: + + ```bash + docker-compose up -d + ``` + +上記のコマンドを実行すると、すべてのコンテナの状態とポートマッピングを表示する以下のような出力が表示されるはずです: + +```Shell +[+] Running 11/11 + ✔ Network docker_ssrf_proxy_network Created 0.1s + ✔ Network docker_default Created 0.0s + ✔ Container docker-redis-1 Started 2.4s + ✔ Container docker-ssrf_proxy-1 Started 2.8s + ✔ Container docker-sandbox-1 Started 2.7s + ✔ Container docker-web-1 Started 2.7s + ✔ Container docker-weaviate-1 Started 2.4s + ✔ Container docker-db-1 Started 2.7s + ✔ Container docker-api-1 Started 6.5s + ✔ Container docker-worker-1 Started 6.4s + ✔ Container docker-nginx-1 Started 7.1s +``` + +最後に、すべてのコンテナが正常に稼働しているか確認します: + +```bash +docker compose ps +``` + +ここには3つのビジネスサービス `api / worker / web` と、4つの基盤コンポーネント `weaviate / db / redis / nginx` が含まれます。 + +```bash +NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS +docker-api-1 langgenius/dify-api:0.3.2 "/entrypoint.sh" api 4 seconds ago Up 2 seconds 80/tcp, 5001/tcp +docker-db-1 postgres:15-alpine "docker-entrypoint.s…" db 4 seconds ago Up 2 seconds 0.0.0.0:5432->5432/tcp +docker-nginx-1 nginx:latest "/docker-entrypoint.…" nginx 4 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp +docker-redis-1 redis:6-alpine "docker-entrypoint.s…" redis 4 seconds ago Up 3 seconds 6379/tcp +docker-weaviate-1 semitechnologies/weaviate:1.18.4 
"/bin/weaviate --hos…" weaviate 4 seconds ago Up 3 seconds +docker-web-1 langgenius/dify-web:0.3.2 "/entrypoint.sh" web 4 seconds ago Up 3 seconds 80/tcp, 3000/tcp +docker-worker-1 langgenius/dify-api:0.3.2 "/entrypoint.sh" worker 4 seconds ago Up 2 seconds 80/tcp, 5001/tcp +``` + +以上の手順で、Difyをローカルにインストールできます。 + +### Difyの更新 + +Difyソースコードのdockerディレクトリに移動し、以下のコマンドを順に実行します: + +```bash +cd dify/docker +git pull origin main +docker compose down +docker compose pull +docker compose up -d +``` + +#### 環境変数設定の同期 (重要!) + +* `.env.example` ファイルが更新された場合は、必ずローカルの `.env` ファイルをそれに応じて修正してください。 +* `.env` ファイル内のすべての設定項目を確認し、実際の運用環境に合わせて修正してください。`.env.example` から `.env` ファイルに新しい変数を追加したり、変更された値を更新する必要があるかもしれません。 + +### Difyへのアクセス + +管理者初期化ページにアクセスして管理者アカウントを設定します: + +```bash +# ローカル環境 +http://localhost/install + +# サーバー環境 +http://your_server_ip/install +``` + +DifyのWebインターフェースのアドレス: + +```bash +# ローカル環境 +http://localhost + +# サーバー環境 +http://your_server_ip +``` + +### Difyのカスタマイズ + +環境変数は docker/dotenvs にあります。変数を変更する場合は、対応する `.env.example` ファイル名から接尾辞 `.example` を削除し、ファイル中の変数を直接編集してください。その後、以下のコマンドを順に実行します: + +```bash +docker compose down +docker compose up -d +``` + +すべての環境変数は `docker/.env.example` にあります。 + +### その他 + +ご不明な点があれば、[よくある質問](faq.md)をご覧ください。 diff --git a/ja-jp/getting-started/install-self-hosted/environments.mdx b/ja-jp/getting-started/install-self-hosted/environments.mdx new file mode 100644 index 00000000..1e706d80 --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/environments.mdx @@ -0,0 +1,647 @@ +--- +title: 環境変数の説明 +--- + + +### 共通変数 + +#### CONSOLE_API_URL + +コンソールAPIのバックエンドのURLです。認証コールバックURLを組み立てるために使用され、空の場合は同じドメインになります。例:`https://api.console.dify.ai`。 + +#### CONSOLE_WEB_URL + +コンソールウェブの**フロントエンド**のURLです。フロントエンドアドレスの一部を組み合わせたり、CORS設定に使用されます。空の場合は同じドメインになります。例:`https://console.dify.ai` + +#### SERVICE_API_URL + +サービスAPIのURLです。**フロントエンド**にサービスAPIのベースURLを表示するために使用されます。空の場合は同じドメインになります。例:`https://api.dify.ai` + +#### APP_API_URL + 
+WebアプリAPIのバックエンドURLです。**フロントエンド**APIのバックエンドアドレスを宣言するために使用されます。空の場合は同じドメインになります。例:`https://app.dify.ai` + +#### APP_WEB_URL + +WebアプリのURLです。ファイルプレビューやダウンロード用のURLを**フロントエンド**に表示したり、マルチモーダルモデルの入力として使用されます。空の場合は、アプリと同じドメインになります。例:`https://udify.app/` + +#### FILES_URL + +ファイルプレビューまたはダウンロード用のURLプレフィックスです。ファイルプレビューやダウンロードURLをフロントエンドに表示したり、マルチモーダルモデルの入力として使用します。他人による偽造を防ぐため、画像プレビューURLは署名付きで、5分の有効期限があります。 + +*** + +### サーバー側 + +#### MODE + +起動モードです。dockerによる起動時にのみ有効で、ソースコード起動では無効です。 + +* api + + APIサーバーを起動します。 + +* worker + + 非同期キューのワーカーを起動します。 + +#### DEBUG + +デバッグモード。デフォルトはfalse。ローカル開発時にはこの設定をオンにすることを推奨します。これにより、モンキーパッチによって発生する問題を防ぐことができます。 + +#### FLASK_DEBUG + +Flaskのデバッグモード。オンにすると、インターフェースでトレース情報が出力され、デバッグが容易になります。 + +#### SECRET_KEY + +セッションクッキーを安全に署名し、データベース上の機密情報を暗号化するためのキー。 + +初回起動時にこの変数を設定する必要があります。 + +`openssl rand -base64 42`を使用して強力なキーを生成できます。 + +#### DEPLOY_ENV + +デプロイ環境。 + +* PRODUCTION(デフォルト) + + プロダクション環境。 +* TESTING + + テスト環境。フロントエンドページにはテスト環境を示す明確な色の識別が表示されます。 + +#### LOG_LEVEL + +ログ出力レベル。デフォルトはINFO。プロダクション環境ではERRORに設定することを推奨します。 + +#### MIGRATION_ENABLED + +trueに設定した場合、コンテナ起動時に自動的にデータベースのマイグレーションが実行されます。dockerによる起動時にのみ有効で、ソースコード起動では無効です。ソースコード起動の場合、apiディレクトリで手動で`flask db upgrade`を実行する必要があります。 + +#### CHECK_UPDATE_URL + +バージョンチェックポリシーを有効にするかどうか。falseに設定した場合、`https://updates.dify.ai`を呼び出してバージョンチェックを行いません。現在、中国国内から直接CloudFlare Workerのバージョンインターフェースにアクセスできないため、この変数を空に設定すると、このインターフェースの呼び出しをブロックできます。 + +#### TEXT_GENERATION_TIMEOUT_MS + +デフォルト値は60000(ミリ秒)です。
一部のプロセスが進行中にタイムアウトの原因で全部のサービスが利用できなくなるのを防ぐために、テキストの生成やワークフロー進行中にタイムアウトの時間を指定するために使用されます。 + +#### CSP_WHITELIST + +コンテンツセキュリティポリシー (CSP) ホワイトリスト。デフォルトでは有効になっていません。この変数に許可されるドメイン名のリストを入力すると、この変数が自動的に有効になり、潜在的な XSS 攻撃を減らすのに役立ちます。オンにすると、ホワイトリストには次のドメイン名が自動的に含まれます。 + +```url +*.sentry.io http://localhost:* http://127.0.0.1:* https://analytics.google.com https://googletagmanager.com https://api.github.com +``` + +#### コンテナ起動関連設定 + +dockerイメージまたはdocker-composeによる起動時にのみ有効です。 + +* DIFY_BIND_ADDRESS + + APIサービスのバインドアドレス。デフォルト:0.0.0.0、すべてのアドレスからアクセス可能にします。 + +* DIFY_PORT + + APIサービスのバインドポート番号。デフォルト5001。 + +* SERVER_WORKER_AMOUNT + + APIサービスのServer worker数。すなわちgevent workerの数。公式:`CPUのコア数 x 2 + 1`。 + + 詳細はこちら:https://docs.gunicorn.org/en/stable/design.html#how-many-workers + +* SERVER_WORKER_CLASS + + デフォルトはgevent。Windowsの場合、syncまたはsoloに切り替えることができます。 + +* GUNICORN_TIMEOUT + + リクエスト処理のタイムアウト時間。デフォルト200。360に設定することを推奨します。これにより、長時間のSSE接続をサポートできます。 + +* CELERY_WORKER_CLASS + + `SERVER_WORKER_CLASS`と同様に、デフォルトはgevent。Windowsの場合、syncまたはsoloに切り替えることができます。 + +* CELERY_WORKER_AMOUNT + + Celery workerの数。デフォルトは1。必要に応じて設定します。 + +* HTTP_PROXY + + HTTPプロキシのアドレス。国内からOpenAIやHuggingFaceにアクセスできない問題を解決するために使用されます。注意点:プロキシがホストマシンにデプロイされている場合(例:`http://127.0.0.1:7890`)、このプロキシアドレスはローカルモデルに接続する場合と同様に、dockerコンテナ内のホストマシンアドレスを使用する必要があります(例:`http://192.168.1.100:7890`または`http://172.17.0.1:7890`)。 + +* HTTPS_PROXY + + HTTPSプロキシのアドレス。国内からOpenAIやHuggingFaceにアクセスできない問題を解決するために使用されます。HTTPプロキシと同様に設定します。 + +#### データベース設定 + +データベースにはPostgreSQLを使用します。public schemaを使用してください。 + +* DB_USERNAME:ユーザー名 +* DB_PASSWORD:パスワード +* DB_HOST:データベースホスト +* DB_PORT:データベースポート番号。デフォルト5432 +* DB_DATABASE:データベース名 +* SQLALCHEMY_POOL_SIZE:データベース接続プールのサイズ。デフォルトは30接続。必要に応じて増やせます。 +* SQLALCHEMY_POOL_RECYCLE:データベース接続プールのリサイクル時間。デフォルト3600秒。 +* SQLALCHEMY_ECHO:SQLを出力するかどうか。デフォルトはfalse。 + +#### Redis 設定 + +このRedis設定はキャッシュおよび対話時のpub/subに使用されます。 + +* REDIS_HOST:Redisホスト +* REDIS_PORT:Redisポート。デフォルト6379 +* 
REDIS_DB:Redisデータベース。デフォルトは0。セッションRedisおよびCeleryブローカーとは異なるデータベースを使用してください。 +* REDIS_USERNAME:Redisユーザー名。デフォルトは空 +* REDIS_PASSWORD:Redisパスワード。デフォルトは空。パスワードを設定することを強く推奨します。 +* REDIS_USE_SSL:SSLプロトコルを使用して接続するかどうか。デフォルトはfalse +* REDIS_USE_SENTINEL:Redis Sentinelを使用してRedisサーバーに接続 +* REDIS_SENTINELS:Sentinelノード、フォーマット:`:,:,:` +* REDIS_SENTINEL_SERVICE_NAME:Sentinelサービス名、Master Nameと同じ +* REDIS_SENTINEL_USERNAME:Sentinelのユーザー名 +* REDIS_SENTINEL_PASSWORD:Sentinelのパスワード +* REDIS_SENTINEL_SOCKET_TIMEOUT:Sentinelのタイムアウト、デフォルト値:0.1、単位:秒 + + +#### Celery 設定 + +* CELERY_BROKER_URL + + フォーマットは以下の通りです(直接接続モード) + + ``` + redis://:@:/ + ``` + + 例:`redis://:difyai123456@redis:6379/1` + + Sentinelモード + + 例:`sentinel://localhost:26379/1;sentinel://localhost:26380/1;sentinel://localhost:26381/1` + +* BROKER_USE_SSL + + trueに設定した場合、SSLプロトコルを使用して接続します。デフォルトはfalse。 + +* CELERY_USE_SENTINEL + + trueに設定すると、Sentinelモードが有効になります。デフォルトはfalse + +* CELERY_SENTINEL_MASTER_NAME + + Sentinelのサービス名、すなわちMaster Name + +* CELERY_SENTINEL_SOCKET_TIMEOUT + + Sentinelへの接続タイムアウト、デフォルト値:0.1、単位:秒 + +#### CORS 設定 + +フロントエンドのクロスオリジンアクセスポリシーを設定するために使用します。 + +* CONSOLE_CORS_ALLOW_ORIGINS + + コンソールのCORSクロスオリジンポリシー。デフォルトは`*`、すべてのドメインがアクセス可能です。 +* WEB_API_CORS_ALLOW_ORIGINS + + WebアプリのCORSクロスオリジンポリシー。デフォルトは`*`、すべてのドメインがアクセス可能です。 + +詳細な設定については、次のガイドを参照してください:[クロスオリジン/認証関連ガイド](https://docs.dify.ai/v/ja-jp/learn-more/faq/install-faq#id-3-insutruniroguindekinaimataharoguinshitani401ergasareruha) + +#### ファイルストレージ設定 + +データセットのアップロードファイル、チーム/テナントの暗号化キーなどのファイルを保存するために使用します。 + +* STORAGE_TYPE + + ストレージのタイプ + + * local(デフォルト) + + ローカルファイルストレージ。この場合、以下の`STORAGE_LOCAL_PATH`を設定する必要があります。 + + * s3 + + S3オブジェクトストレージ。この場合、以下のS3_プレフィックスを設定する必要があります。 + + * azure-blob + + Azure Blobストレージ。この場合、以下のAZURE_BLOB_ プレフィックスを設定する必要があります。 + + * huawei-obs + + Huawei OBS オブジェクト ストレージ。このオプションが選択されている場合は、次の HUAWEI_OBS_ という接頭辞が付いた構成を設定する必要があります。 + +* STORAGE_LOCAL_PATH + + デフォルトはstorage、すなわち現在のディレクトリのstorageディレクトリに保存します。 + + 
dockerまたはdocker-composeでデプロイする場合、2つのコンテナにある`/app/api/storage`ディレクトリを同じローカルディレクトリにマウントする必要があります。そうしないと、ファイルが見つからないエラーが発生する可能性があります。 + +* S3_ENDPOINT:S3エンドポイントアドレス +* S3_BUCKET_NAME:S3バケット名 +* S3_ACCESS_KEY:S3アクセスキー +* S3_SECRET_KEY:S3シークレットキー +* S3_REGION:S3リージョン情報(例:us-east-1) +* AZURE_BLOB_ACCOUNT_NAME: アカウント名(例:'difyai') +* AZURE_BLOB_ACCOUNT_KEY: アカウントキー(例:'difyai') +* AZURE_BLOB_CONTAINER_NAME: コンテナ名(例:'difyai-container') +* AZURE_BLOB_ACCOUNT_URL: 'https://\.blob.core.windows.net' +* ALIYUN_OSS_BUCKET_NAME: your-bucket-name(例:'difyai') +* ALIYUN_OSS_ACCESS_KEY: your-access-key(例:'difyai') +* ALIYUN_OSS_SECRET_KEY: your-secret-key(例:'difyai') +* ALIYUN_OSS_ENDPOINT: https://oss-ap-southeast-1-internal.aliyuncs.com # reference: https://www.alibabacloud.com/help/en/oss/user-guide/regions-and-endpoints +* ALIYUN_OSS_REGION: ap-southeast-1 # reference: https://www.alibabacloud.com/help/en/oss/user-guide/regions-and-endpoints +* ALIYUN_OSS_AUTH_VERSION: v4 +* ALIYUN_OSS_PATH: your-path # Don't start with '/'. OSS doesn't support leading slash in object names. 
reference: https://www.alibabacloud.com/help/en/oss/support/0016-00000005 +* HUAWEI_OBS_BUCKET_NAME: your-bucket-name(例:'difyai') +* HUAWEI_OBS_SECRET_KEY: your-secret-key(例:'difyai') +* HUAWEI_OBS_ACCESS_KEY: your-access-key(例:'difyai') +* HUAWEI_OBS_SERVER: your-server-url # 参考文献: https://support.huaweicloud.com/sdk-python-devg-obs/obs_22_0500.html + +#### ベクトルデータベース設定 + +* VECTOR_STORE + + **使用可能な列挙型は以下を含みます:** + + * `weaviate` + * `qdrant` + * `milvus` + * `zilliz`(`milvus`と同じ) + * `pinecone`(現在未公開) + * `tidb_vector` + * `analyticdb` + * `couchbase` + * `oceanbase` + +* WEAVIATE_ENDPOINT + + Weaviateエンドポイントアドレス(例:`http://weaviate:8080`)。 + +* WEAVIATE_API_KEY + + Weaviateに接続するために使用するapi-keyの資格情報。 + +* WEAVIATE_BATCH_SIZE + + Weaviateでオブジェクトのバッチ作成数。デフォルトは100。詳細はこちらのドキュメントを参照してください:https://weaviate.io/developers/weaviate/manage-data/import#how-to-set-batch-parameters + +* WEAVIATE_GRPC_ENABLED + + Weaviateとの通信にgRPC方式を使用するかどうか。オンにすると性能が大幅に向上しますが、ローカルでは使用できない可能性があります。デフォルトはtrueです。 + +* QDRANT_URL + + Qdrantエンドポイントアドレス(例:`https://your-qdrant-cluster-url.qdrant.tech/`)。 + +* QDRANT_API_KEY + + Qdrantに接続するために使用するapi-keyの資格情報。 + +* PINECONE_API_KEY + + Pineconeに接続するために使用するapi-keyの資格情報。 + +* PINECONE_ENVIRONMENT + + Pineconeの環境(例:`us-east4-gcp`)。 + +* MILVUS_URI + + MilvusのURI設定。例:`http://host.docker.internal:19530`。[Zilliz Cloud](https://zilliz.com/jp/cloud)の場合は、URIとトークンを パブリックエンドポイントとAPIキーに調整してください。 + +* MILVUS_TOKEN + + MilvusのTOKEN設定。デフォルトは空。 + +* MILVUS_USER + + Milvusユーザーの設定。デフォルトは空。 + +* MILVUS_PASSWORD + + Milvusパスワードの設定。デフォルトは空。 + +* TIDB_VECTOR_HOST + + TiDB Vectorホスト設定(例:`xxx.eu-central-1.xxx.tidbcloud.com`) + +* TIDB_VECTOR_PORT + + TiDB Vectorポート番号設定(例:`4000`) + +* TIDB_VECTOR_USER + + TiDB Vectorユーザー設定(例:`xxxxxx.root`) + +* TIDB_VECTOR_PASSWORD + + TiDB Vectorパスワード設定 +* TIDB_VECTOR_DATABASE + + TiDB Vectorデータベース設定(例:`dify`) + +* ANALYTICDB_KEY_ID + + Aliyun 
OpenAPI認証に使用されるアクセスキーIDです。[ドキュメンテーション](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/support/create-an-accesskey-pair) を参照してAccessKeyを作成します。 + +* ANALYTICDB_KEY_SECRET + + Aliyun OpenAPI認証に使用されるアクセスキーシークレットです。 + +* ANALYTICDB_INSTANCE_ID + + あなたのAnalyticDBインスタンスのユニークな識別子で、例えば `gp-xxxxxx` です。[ドキュメンテーション](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/getting-started/create-an-instance-1) を参照してインスタンスを作成します。 + +* ANALYTICDB_REGION_ID + + AnalyticDBインスタンスが位置するリージョンの識別子で、例えば `cn-hangzhou` です。 + +* ANALYTICDB_ACCOUNT + + AnalyticDBインスタンスに接続するために使用するアカウント名です。[ドキュメンテーション](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/getting-started/createa-a-privileged-account) を参照してアカウントを作成します。 + +* ANALYTICDB_PASSWORD + + AnalyticDBインスタンスに接続するために使用するアカウントのパスワードです。 + +* ANALYTICDB_NAMESPACE + + AnalyticDBインスタンス内で操作したいnamespace(schema)です。例えば `dify` です。このnamespaceが存在しない場合、自動的に作成されます。 + +* ANALYTICDB_NAMESPACE_PASSWORD + + namespace(schema)のパスワードです。このnamespaceが存在しない場合、このパスワードで作成されます。 + +* COUCHBASE_CONNECTION_STRING + + クラスターへのCouchbase接続文字列です。 + +* COUCHBASE_USER + + データベースユーザーのユーザー名です。 + +* COUCHBASE_PASSWORD + + データベースユーザーのパスワードです。 + +* COUCHBASE_BUCKET_NAME + + 使用するバケットの名前です。 + +* COUCHBASE_SCOPE_NAME + + 使用するスコープの名前です。 + +* OCEANBASE_VECTOR_HOST + + OceanBase Vector ホスト。 + +* OCEANBASE_VECTOR_PORT + + OceanBase Vector ポート。 + +* OCEANBASE_VECTOR_USER + + OceanBase Vector ユーザー名。 + +* OCEANBASE_VECTOR_PASSWORD + + OceanBase Vector パスワード。 + +* OCEANBASE_VECTOR_DATABASE + + OceanBase Vector データベース名。 + +* OCEANBASE_CLUSTER_NAME + + OceanBase クラスタ名,Docker デプロイメントのみ。 + +* OCEANBASE_MEMORY_LIMIT + + OceanBase メモリ使用上限,Docker デプロイメントのみ。 + +#### ナレッジベース設定 + +* UPLOAD_FILE_SIZE_LIMIT + + アップロードファイルのサイズ制限。デフォルトは15M。 + +* UPLOAD_FILE_BATCH_LIMIT + + 一度にアップロードできるファイル数の上限。デフォルトは5個。 + +* ETL_TYPE + + **使用可能な列挙型は以下を含みます:** + + * dify + + Dify独自のファイル抽出ソリューション + + * Unstructured + + Unstructured.ioのファイル抽出ソリューション + +* UNSTRUCTURED_API_URL + + 
ETL_TYPEがUnstructuredの場合、Unstructured APIパスの設定が必要です。 + + 例:`http://unstructured:8000/general/v0/general` + +#### マルチモーダルモデル設定 + +* MULTIMODAL_SEND_IMAGE_FORMAT + + マルチモーダルモデルの入力時に画像を送信する形式。デフォルトは`base64`、オプションで`url`。`url`モードでは呼び出しの遅延が`base64`モードよりも少なく、一般的には互換性が高い`base64`モードを推奨します。`url`に設定する場合、`FILES_URL`を外部からアクセス可能なアドレスに設定する必要があります。これにより、マルチモーダルモデルが画像にアクセスできるようになります。 + +* UPLOAD_IMAGE_FILE_SIZE_LIMIT + + アップロード画像ファイルのサイズ制限。デフォルトは10M。 + +#### Sentry 設定 + +アプリの監視およびエラーログトラッキングに使用されます。 + +* SENTRY_DSN + + Sentry DSNアドレス。デフォルトは空。空の場合、すべての監視情報はSentryに報告されません。 + +* SENTRY_TRACES_SAMPLE_RATE + + Sentryイベントの報告割合。例えば、0.01に設定すると1%となります。 + +* SENTRY_PROFILES_SAMPLE_RATE + + Sentryプロファイルの報告割合。例えば、0.01に設定すると1%となります。 + +#### Notion 統合設定 + +Notion統合設定。変数はNotion integrationを申請することで取得できます:[https://www.notion.so/my-integrations](https://www.notion.so/my-integrations) + +* NOTION_CLIENT_ID +* NOTION_CLIENT_SECRET + +#### メール関連の設定 + +* MAIL_TYPE + * resend + * MAIL_DEFAULT_SEND_FROM\ + 送信者のメール名,(例:no-reply [no-reply@dify.ai](mailto:no-reply@dify.ai))、必須ではありません。 + * RESEND_API_KEY\ + ResendメールプロバイダーのAPIキー。APIキーから取得できます。 + * smtp + * SMTP_SERVER\ + SMTPサーバーアドレス + * SMTP_PORT\ + SMTPサーバポートnumber + * SMTP_USERNAME\ + SMTP ユーザー名 + * SMTP_PASSWORD\ + SMTP パスワード + * SMTP_USE_TLS\ + TLSを使用するかどうか, デフォルトは false + * MAIL_DEFAULT_SEND_FROM\ + 送り人のメールアドレス, (例:no-reply [no-reply@dify.ai](mailto:no-reply@dify.ai))、必須ではありません。 + +#### モデルプロバイダ & ツールの位置の構成 + +アプリで使用できるモデルプロバイダーとツールを指定するために使用されます。これらの設定により、使用可能なツールとモデルプロバイダー、およびアプリのインターフェースでの順序と含める/除外をカスタマイズできます。 + +使用可能な[ツール](https://github.com/langgenius/dify/blob/main/api/core/tools/provider/_position.yaml) と [モデルプロバイダ](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/model_providers/_position.yaml)のリストについては、提供されているリンクを参照してください。 + +* POSITION_TOOL_PINS + + リストされたツールをリストの先頭に固定して、インターフェイスの先頭に確実に表示されるようにします。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_TOOL_PINS=bing,google` + +* POSITION_TOOL_INCLUDES + + 
アプリに含めるツールを指定します。ここにリストされているツールのみが使用可能です。設定されていない場合は、POSITION_TOOL_EXCLUDES で指定されていない限り、すべてのツールが含まれます。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_TOOL_INCLUDES=bing,google` + +* POSITION_TOOL_EXCLUDES + + 特定のツールをアプリでの表示または使用から除外します。ここにリストされているツールは、固定されていない限り、使用可能なオプションから除外されます。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_TOOL_EXCLUDES=yahoo,wolframalpha` + +* POSITION_PROVIDER_PINS + + リストされたモデルプロバイダーをリストの先頭にピン留めして、インターフェイスの先頭に確実に表示されるようにします。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_PROVIDER_PINS=openai,openllm` + +* POSITION_PROVIDER_INCLUDES + + アプリに含めるモデルプロバイダーを指定します。ここにリストされているプロバイダーのみが利用可能です。設定されていない場合は、POSITION_PROVIDER_EXCLUDES で指定されていない限り、すべてのプロバイダーが含まれます。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_PROVIDER_INCLUDES=cohere,upstage` + +* POSITION_PROVIDER_EXCLUDES + + 特定のモデルプロバイダーをアプリでの表示から除外します。ここにリストされているプロバイダーは、固定されない限り、利用可能なオプションから削除されます。 (間に**スペースを入れず**にカンマ区切りの値を使用します。) + + 例: `POSITION_PROVIDER_EXCLUDES=openrouter,ollama` + +#### その他 + +* INVITE_EXPIRY_HOURS:メンバー招待リンクの有効期間(時間)。デフォルト:72。 +* HTTP_REQUEST_NODE_MAX_TEXT_SIZE:ワークフロー内のHTTPリクエストノードの最大テキストサイズ。デフォルト1MB。 +* HTTP_REQUEST_NODE_MAX_BINARY_SIZE:ワークフロー内のHTTPリクエストノードの最大バイナリサイズ。デフォルト10MB。 + +*** + +### Web フロントエンド + +#### SENTRY_DSN + +Sentry DSNアドレス。デフォルトは空。空の場合、すべての監視情報はSentryに報告されません。 + +## 廃止されました + +#### CONSOLE_URL + +> ⚠️ この設定はバージョン 0.3.8 で改善され、0.4.9 で廃止されました。代わりに `CONSOLE_API_URL` と `CONSOLE_WEB_URL` を使用してください。 + +コンソールのURLです。認証コールバックやコンソールフロントエンドのアドレスの組み立て、およびCORSの設定に使用されます。空の場合は同一ドメインになります。例:`https://console.dify.ai`。 + +#### API_URL + +> ⚠️ この設定はバージョン 0.3.8 で改善され、0.4.9 で廃止されました。代わりに `SERVICE_API_URL` を使用してください。 + +APIのURLです。**フロントエンド**でサービスAPIのベースURLを宣言するために使用されます。空の場合は同一ドメインになります。例:`https://api.dify.ai` + +#### APP_URL + +> ⚠️ この設定はバージョン 0.3.8 で改善され、0.4.9 で廃止されました。代わりに `APP_API_URL` と `APP_WEB_URL` を使用してください。 + +WebAppのURLです。**フロントエンド**でWebAppのAPIバックエンドアドレスを宣言するために使用されます。空の場合は同一ドメインになります。例:`https://udify.app/` + +#### Session Configuration + +> ⚠️ この設定はバージョン 0.3.24 から廃止されました。 + 
+ +APIサービスのインターフェース認証にのみ使用されます。 + +* SESSION_TYPE: セッションコンポーネントのタイプ + + * redis(デフォルト) + + これを選択した場合、下記の SESSION_REDIS_ で始まる環境変数を設定する必要があります。 + + * sqlalchemy + + これを選択した場合、現在のデータベース接続を使用し、sessions テーブルを使用してセッションレコードを読み書きします。 + +* SESSION_REDIS_HOST:Redis ホスト +* SESSION_REDIS_PORT:Redis ポート、デフォルトは 6379 +* SESSION_REDIS_DB:Redis データベース、デフォルトは 0、Redis および Celery ブローカーとは異なるデータベースを使用してください。 +* SESSION_REDIS_USERNAME:Redis ユーザー名、デフォルトは空 +* SESSION_REDIS_PASSWORD:Redis パスワード、デフォルトは空、パスワードの設定を強く推奨します。 +* SESSION_REDIS_USE_SSL:SSL プロトコルを使用して接続するかどうか、デフォルトは false + +#### クッキー戦略の設定 + +> ⚠️ この設定はバージョン 0.3.24 から廃止されました。 + +セッションクッキーのブラウザ戦略を設定するために使用されます。 + +* COOKIE_HTTPONLY + + クッキーの HttpOnly 設定、デフォルトは true。 + +* COOKIE_SAMESITE + + クッキーの SameSite 設定、デフォルトは Lax。 + +* COOKIE_SECURE + + クッキーの Secure 設定、デフォルトは false。 + +### 文書チャンク長の設定 + +#### MAXIMUM_CHUNK_TOKEN_LENGTH + +文書チャンク長の設定。長文処理時のテキストセグメントサイズを制御するために使用。デフォルト値:500。最大値:4000。 + +**大きなチャンク** +- 単一のチャンク内により多くの文脈を保持でき、複雑または文脈依存のタスクに適しています。 +- チャンク数が減少し、処理時間やストレージの負担が軽減されます。 + +**小さなチャンク** +- より細かい粒度を提供し、正確な情報抽出や要約タスクに適しています。 +- モデルのトークン制限を超えるリスクを低減し、制限の厳しいモデルに適応します。 + +**設定の推奨** +- 大きなチャンク: 文脈依存性が高いタスク(例: 感情分析や長文の要約)に適しています。 +- 小さなチャンク: 詳細な分析が必要なタスク(例: キーワード抽出や段落レベルの内容処理)に適しています。 diff --git a/ja-jp/getting-started/install-self-hosted/faq.mdx b/ja-jp/getting-started/install-self-hosted/faq.mdx new file mode 100644 index 00000000..0a357ef9 --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/faq.mdx @@ -0,0 +1,71 @@ +--- +title: よくある質問 +--- + + +### 1. パスワードリセットメールが長時間届かない場合はどうしたらいいですか? + +`.env`ファイルに`Mail`パラメータを設定する必要があることをご確認ください。メール設定に関する詳細は、「[環境変数の説明:メール関連の設定](https://docs.dify.ai/v/ja-jp/getting-started/install-self-hosted/environments#mru)」セクションをご参照ください。 + +設定の変更後は、以下のコマンドを実行して、サービスをリスタートさせてください。 + +```bash +docker compose down +docker compose up -d +``` + +それでもまだメールが届かない場合は、メールサービスが正常に動作しているか、またメールがスパムフィルターに捕まっていないかをご確認ください。 + +### 2. ワークフローが複雑すぎてノードの上限を超えた場合、どう対処しますか? 
+ +コミュニティ版では、`web/app/components/workflow/constants.ts`で手動で `MAX_TREE_DEPTH` の単一ブランチの深さの上限を調整できます。私たちのデフォルト値は50ですが、自分で拡張した場合、あまりにも深いブランチはパフォーマンスに影響を与える可能性があることに注意してください。 + +### 3. 各ワークフローノードのランタイムを指定するには? + +`.env` ファイル内の `TEXT_GENERATION_TIMEOUT_MS` 変数を変更することで、各ノードのランタイムを調整することができます。これにより、特定のプロセスがタイムアウトすることによるアプリケーション全体のサービス停止を防ぐことができます。 + +### 4. 管理者アカウントのパスワードをリセットする方法 + +Docker Composeを使ってデプロイしている場合、Docker Composeの実行中に以下のコマンドでパスワードをリセットすることができます: + +``` +docker exec -it docker-api-1 flask reset-password +``` + +メールアドレスと新しいパスワードを入力するプロンプトが表示されます。例えば: + +``` +dify@my-pc:~/hello/dify/docker$ docker compose up -d +[+] Running 9/9 + ✔ Container docker-web-1 Started 0.1s + ✔ Container docker-sandbox-1 Started 0.1s + ✔ Container docker-db-1 Started 0.1s + ✔ Container docker-redis-1 Started 0.1s + ✔ Container docker-weaviate-1 Started 0.1s + ✔ Container docker-ssrf_proxy-1 Started 0.1s + ✔ Container docker-api-1 Started 0.1s + ✔ Container docker-worker-1 Started 0.1s + ✔ Container docker-nginx-1 Started 0.1s +dify@my-pc:~/hello/dify/docker$ docker exec -it docker-api-1 flask reset-password +None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. +sagemaker.config INFO - Not applying SDK defaults from location: /etc/xdg/sagemaker/config.yaml +sagemaker.config INFO - Not applying SDK defaults from location: /root/.config/sagemaker/config.yaml +Email: hello@dify.ai +New password: newpassword4567 +Password confirm: newpassword4567 +Password reset successfully. +``` + +### 5. 
ポートの変更方法 + +Docker Compose を使用してデプロイする場合、`.env` 設定を変更することで Dify のアクセスポートをカスタマイズできます。 + +Nginx 関連の設定を変更する必要があります: + +``` +EXPOSE_NGINX_PORT=80 +EXPOSE_NGINX_SSL_PORT=443 +``` + + +他のデプロイに関する質問は[ここに](../../learn-more/faq/install-faq.md)います。 \ No newline at end of file diff --git a/ja-jp/getting-started/install-self-hosted/local-source-code.mdx b/ja-jp/getting-started/install-self-hosted/local-source-code.mdx new file mode 100644 index 00000000..bc9920de --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/local-source-code.mdx @@ -0,0 +1,263 @@ +--- +title: ローカルソースコード起動 +--- + + +### 前提条件 + +> Dify インストール前に, ぜひマシンが最小インストール要件を満たしていることを確認してください: +> - CPU >= 2 Core +> - RAM >= 4 GiB + + + + + + + + + + + + + + + + + + + + + + + + + + +
オペレーティングシステムソフトウェア説明
macOS 10.14またはそれ以降Docker DesktopDocker 仮想マシン(VM)を少なくとも2つの仮想CPU(vCPU)と8GBの初期メモリを使用するように設定してください。そうでないと、インストールが失敗する可能性があります。詳細はMacにDocker Desktopをインストールするを参照してください。
Linux プラットフォーム +

Docker 19.03またはそれ以降

+

Docker Compose 1.25.1またはそれ以降

+
詳細はDockerをインストールするおよびDocker Composeをインストールするを参照してください。
WSL 2が有効なWindowsDocker Desktopソースコードや他のデータをLinuxコンテナにバインドする際、WindowsファイルシステムではなくLinuxファイルシステムに保存することをお勧めします。詳細はWSL 2バックエンドを使用してWindowsにDocker Desktopをインストールするを参照してください。
+> OpenAI TTSを使用する場合、システムにFFmpegをインストールする必要があります。詳細は[リンク](https://docs.dify.ai/v/ja-jp/learn-more/faq/install-faq#id-15-tekisutomigeniopenai-error-ffmpeg-is-not-installedtoiuergashitano)を参照してください。 + +Dify コードをクローン: + +```Bash +git clone https://github.com/langgenius/dify.git +``` + +ビジネスサービスを有効にする前に、PostgreSQL / Redis / Weaviate(ローカルにない場合)をデプロイする必要があります。以下のコマンドで起動できます: + +```Bash +cd docker +docker compose -f docker-compose.middleware.yaml up -d +``` + +*** + +### サービスデプロイ + +* API インターフェースサービス +* Worker 非同期キュー消費サービス + +#### 基本環境インストール + +サーバーの起動にはPython 3.12 が必要です。Python環境の迅速なインストールには[pyenv](https://github.com/pyenv/pyenv)を使用することをお勧めします。 + +追加のPythonバージョンをインストールするには、pyenv installを使用します。 + +```Bash +pyenv install 3.12 +``` + +"3.12" の Python 環境に切り替えるには、次のコマンドを使用します。 + +```Bash +pyenv global 3.12 +``` + + +#### 起動手順 + +1. apiディレクトリに移動 + + ``` + cd api + ``` +> macOSの場合:`brew install libmagic`でlibmagicをインストールしてください。 + +2. 環境変数構成ファイルをコピー + + ``` + cp .env.example .env + ``` +3. ランダムキーを生成し、`.env`の`SECRET_KEY`の値を置き換え + + ``` + awk -v key="$(openssl rand -base64 42)" '/^SECRET_KEY=/ {sub(/=.*/, "=" key)} 1' .env > temp_env && mv temp_env .env + ``` +4. 依存関係をインストール + + Dify APIサービスは依存関係を管理するために[Poetry](https://python-poetry.org/docs/)を使用します。 + + ``` + poetry env use 3.12 + poetry install + ``` +5. データベース移行を実行 + + データベーススキーマを最新バージョンに更新します。 + + ``` + poetry run flask db upgrade + ``` +6. APIサービスを開始 + + ``` + poetry run flask run --host 0.0.0.0 --port=5001 --debug + ``` + + 正常な出力: + + ``` + * Debug mode: on + INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. + * Running on all addresses (0.0.0.0) + * Running on http://127.0.0.1:5001 + INFO:werkzeug:Press CTRL+C to quit + INFO:werkzeug: * Restarting with stat + WARNING:werkzeug: * Debugger is active! + INFO:werkzeug: * Debugger PIN: 695-801-919 + ``` +7. 
Workerサービスを開始 + + データセットファイルのインポートやデータセットドキュメントの更新などの非同期操作を消費するためのサービスです。Linux / MacOSでの起動: + + ``` + poetry run celery -A app.celery worker -P gevent -c 1 -Q dataset,generation,mail,ops_trace --loglevel INFO + ``` + + Windowsシステムでの起動の場合、以下のコマンドを使用してください: + + ``` + poetry run celery -A app.celery worker -P solo --without-gossip --without-mingle -Q dataset,generation,mail,ops_trace --loglevel INFO + ``` + + 正常な出力: + + ``` + -------------- celery@TAKATOST.lan v5.2.7 (dawn-chorus) + --- ***** ----- + -- ******* ---- macOS-10.16-x86_64-i386-64bit 2023-07-31 12:58:08 + - *** --- * --- + - ** ---------- [config] + - ** ---------- .> app: app:0x7fb568572a10 + - ** ---------- .> transport: redis://:**@localhost:6379/1 + - ** ---------- .> results: postgresql://postgres:**@localhost:5432/dify + - *** --- * --- .> concurrency: 1 (gevent) + -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) + --- ***** ----- + -------------- [queues] + .> dataset exchange=dataset(direct) key=dataset + .> generation exchange=generation(direct) key=generation + .> mail exchange=mail(direct) key=mail + + [tasks] + . tasks.add_document_to_index_task.add_document_to_index_task + . tasks.clean_dataset_task.clean_dataset_task + . tasks.clean_document_task.clean_document_task + . tasks.clean_notion_document_task.clean_notion_document_task + . tasks.create_segment_to_index_task.create_segment_to_index_task + . tasks.deal_dataset_vector_index_task.deal_dataset_vector_index_task + . tasks.document_indexing_sync_task.document_indexing_sync_task + . tasks.document_indexing_task.document_indexing_task + . tasks.document_indexing_update_task.document_indexing_update_task + . tasks.enable_segment_to_index_task.enable_segment_to_index_task + . tasks.generate_conversation_summary_task.generate_conversation_summary_task + . tasks.mail_invite_member_task.send_invite_member_mail_task + . tasks.remove_document_from_index_task.remove_document_from_index_task + . 
tasks.remove_segment_from_index_task.remove_segment_from_index_task + . tasks.update_segment_index_task.update_segment_index_task + . tasks.update_segment_keyword_index_task.update_segment_keyword_index_task + + [2023-07-31 12:58:08,831: INFO/MainProcess] Connected to redis://:**@localhost:6379/1 + [2023-07-31 12:58:08,840: INFO/MainProcess] mingle: searching for neighbors + [2023-07-31 12:58:09,873: INFO/MainProcess] mingle: all alone + [2023-07-31 12:58:09,886: INFO/MainProcess] pidbox: Connected to redis://:**@localhost:6379/1. + [2023-07-31 12:58:09,890: INFO/MainProcess] celery@TAKATOST.lan ready. + ``` + +*** + +### フロントエンドページデプロイ + +Web フロントエンドクライアントページサービス + +#### 基本環境インストール + +Web フロントエンドサービスを起動するには[Node.js v18.x (LTS)](http://nodejs.org)、[NPMバージョン8.x.x](https://www.npmjs.com/)または[Yarn](https://yarnpkg.com/)が必要です。 + +* NodeJS + NPMをインストール + +https://nodejs.org/en/download から対応するOSのv18.x以上のインストーラーをダウンロードしてインストールしてください。stableバージョンをお勧めします。NPMも同梱されています。 + +#### 起動手順 + +1. webディレクトリに移動 + + ``` + cd web + ``` +2. 依存関係をインストール + + ``` + npm install + ``` +3. 環境変数を構成。現在のディレクトリに `.env.local` ファイルを作成し、`.env.example` の内容をコピーします。必要に応じてこれらの環境変数の値を変更します。 + + ``` + # For production release, change this to PRODUCTION + NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT + # The deployment edition, SELF_HOSTED + NEXT_PUBLIC_EDITION=SELF_HOSTED + # The base URL of console application, refers to the Console base URL of WEB service if console domain is + # different from api or web app domain. + # example: http://cloud.dify.ai/console/api + NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api + # The URL for Web APP, refers to the Web App base URL of WEB service if web app domain is different from + # console or api domain. + # example: http://udify.app/api + NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api + + # SENTRY + NEXT_PUBLIC_SENTRY_DSN= + NEXT_PUBLIC_SENTRY_ORG= + NEXT_PUBLIC_SENTRY_PROJECT= + ``` +4. コードをビルド + + ``` + npm run build + ``` +5. 
webサービスを開始 + + ``` + npm run start + # または + yarn start + # または + pnpm start + ``` + +正常に起動すると、ターミナルに以下の情報が出力されます: + +``` +ready - started server on 0.0.0.0:3000, url: http://localhost:3000 +warn - You have enabled experimental feature (appDir) in next.config.js. +warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk. +info - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback +``` + +### Difyを訪問 + +最後に、http://127.0.0.1:3000 にアクセスすると、ローカルデプロイメントされたDifyを使用できます。 diff --git a/ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx b/ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx new file mode 100644 index 00000000..fe2ac1ff --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/start-the-frontend-docker-container.mdx @@ -0,0 +1,27 @@ +--- +title: 単独でフロントエンドのDockerコンテナを起動する +--- + + +バックエンドを単独で開発する際、ソースコードからバックエンドサービスを起動するだけで十分で、フロントエンドのコードをローカルで構築して起動する必要はないかもしれません。その代わり、Dockerイメージをプルしてコンテナを起動する方法でフロントエンドサービスを起動することができます。以下は具体的な手順です: + +#### DockerHubのイメージを直接使用する + +```Bash +docker run -it -p 3000:3000 -e CONSOLE_URL=http://127.0.0.1:5001 -e APP_URL=http://127.0.0.1:5001 langgenius/dify-web:latest +``` + +#### ソースコードからDockerイメージを構築する + +1. フロントエンドイメージを構築する + + ``` + cd web && docker build . -t dify-web + ``` +2. フロントエンドイメージを起動する + + ``` + docker run -it -p 3000:3000 -e CONSOLE_URL=http://127.0.0.1:5001 -e APP_URL=http://127.0.0.1:5001 dify-web + ``` +3. コンソールのドメイン名とWeb APPのドメイン名が一致しない場合、`CONSOLE_URL`と`APP_URL`を個別に設定できます。 +4. 
ローカルで [http://127.0.0.1:3000](http://127.0.0.1:3000) にアクセスします。 \ No newline at end of file diff --git a/ja-jp/getting-started/install-self-hosted/zeabur.mdx b/ja-jp/getting-started/install-self-hosted/zeabur.mdx new file mode 100644 index 00000000..65b628a1 --- /dev/null +++ b/ja-jp/getting-started/install-self-hosted/zeabur.mdx @@ -0,0 +1,34 @@ +--- +title: Zeabur に Dify をデプロイする +--- + + +[Zeabur](https://zeabur.com) は、ワンクリックデプロイで Dify をデプロイできるサービスデプロイプラットフォームです。本ガイドは、Zeabur に Dify をデプロイする方法を説明します。 + +## 前提条件 + +開始する前に、以下の事項が必要です: + +- Zeabur のアカウント。アカウントをお持ちでない場合は、[Zeabur](https://zeabur.com/) で無料のアカウントを登録できます。 +- Zeabur のアカウントを開発者プラン(月額 5 ドル)にアップグレードする必要があります。詳細は [Zeabur 定价](https://zeabur.com/pricing) をご覧ください。 + +## Dify を Zeabur にデプロイする + +Zeabur チームはワンクリックデプロイテンプレートを用意しています。以下のボタンをクリックするだけで開始できます: + +[![Deploy to Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/1D4DOW) + +ボタンをクリックすると、Zeabur 上のテンプレートページに移動し、デプロイの詳細情報と説明を確認できます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/getting-started/install-self-hosted/fc58c921d332857fb644d4c869162bb5.jpeg) + +デプロイボタンをクリックした後、生成されたドメイン名を入力し、そのドメイン名を Dify インスタンスにバインドし、他のサービスに環境変数として注入します。 +次に、お好みのリージョンを選択し、デプロイボタンをクリックすると、数分以内に Dify インスタンスがデプロイされます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/getting-started/install-self-hosted/43cdd302ca243f2a1d2475a857cc1e66.png) + +デプロイが完了すると、Zeabur コンソール上にプロジェクトページが表示されます。以下の図のように、デプロイ中に入力したドメイン名が自動的に NGINX サービスにバインドされ、そのドメイン名を使用して Dify インスタンスにアクセスできます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/getting-started/install-self-hosted/7aa82d91ab9e798245b9df18221637b2.png) + +また、NGINX サービスページのネットワーキングタブでドメイン名を変更することもできます。詳細については [Zeabur ドキュメント](https://zeabur.com/docs/deploy/domain-binding) を参照してください。 \ No newline at end of file diff --git a/ja-jp/getting-started/readme/features-and-specifications.mdx b/ja-jp/getting-started/readme/features-and-specifications.mdx new file mode 100644 index 00000000..c1c642da --- /dev/null 
+++ b/ja-jp/getting-started/readme/features-and-specifications.mdx @@ -0,0 +1,215 @@ +--- +title: 特徴と技術仕様 +description: LLMアプリケーションの技術スタックに精通している方々にとって、このドキュメントはDify独自の強みを理解するための近道となります。これにより、的確な比較と選択が可能になり、同僚や友人への推奨もしやすくなるでしょう。 +--- + + + +Difyでは、製品の仕様に関する透明性の高いポリシーを採用しています。これにより、製品を十分に理解した上で意思決定を行うことができます。この透明性は、技術選定に役立つだけでなく、コミュニティのメンバーが製品をより深く理解し、積極的に貢献することを促進します。 + + +### プロジェクトの基本情報 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
プロジェクト設立2023年3月
オープンソースライセンスApache License 2.0(商用ライセンスあり)
公式開発チーム15名以上のフルタイム従業員
コミュニティ貢献者290人以上(2024年Q2時点)
バックエンド技術Python / Flask / PostgreSQL
フロントエンド技術Next.js
コードベースサイズ13万行以上
リリース頻度平均週1回
+ +### 技術特徴 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LLM推論エンジンDify Runtime(v0.4以降、LangChainを除去)
商用モデル対応 +

10社以上(OpenAIとAnthropicを含む)

+

新しい主要モデルは通常48時間以内に対応

+
MaaSベンダー対応7社(Hugging Face、Replicate、AWS Bedrock、NVIDIA、GroqCloud、together.ai、OpenRouter)
ローカルモデル対応6(Xorbits[推奨]、OpenLLM、LocalAI、ChatGLM、Ollama、NVIDIA TIS)
OpenAIインターフェース標準モデル統合
マルチモーダル機能 +

音声認識(ASR)モデル

+

GPT-4o水準のリッチテキストモデル

+
内製アプリタイプ +

チャットボット、チャットフロー、テキスト生成、エージェント、ワークフロー

+
Prompt-as-a-Serviceオーケストレーション +

高評価のビジュアルオーケストレーションインターフェース、プロンプトの編集と効果のプレビューを一箇所で実行可能

+

オーケストレーションモード

+
    +
  • シンプルオーケストレーション
  • +
  • アシスタントオーケストレーション
  • +
  • フローオーケストレーション
  • +
+

プロンプト変数タイプ

+
    +
  • 文字列
  • +
  • ラジオボタン列挙型
  • +
  • 外部API
  • +
  • ファイル(2024年Q3にリリース予定)
  • +
+
エージェント型ワークフロー機能 +

業界をリードするビジュアルワークフローオーケストレーションインターフェース、ノードデバッグはライブ編集可能、モジュール式DSL、ネイティブコードランタイムを提供。より複雑で信頼性が高く安定したLLMアプリケーションの構築に対応

+

利用可能なノード

+
    +
  • LLM
  • +
  • 知識取得
  • +
  • 質問分類器
  • +
  • 条件分岐
  • +
  • コード実行
  • +
  • テンプレート
  • +
  • HTTPリクエスト
  • +
  • ツール
  • +
+
RAG機能 +

ビジュアル化された画期的なナレッジベース管理インターフェースを提供。チャンクのプレビューやリコールテストをサポート

インデックス方式

+
    +
  • キーワード
  • +
  • テキストベクトル
  • +
  • LLMによるQ&Aセグメント化
  • +
+

検索方式

+
    +
  • キーワード
  • +
  • テキスト類似度マッチング
  • +
  • ハイブリッド検索
  • +
  • N選択1(レガシー)
  • +
  • マルチパス探索
  • +
+

回答精度の最適化

+
    +
  • ReRankモデルを使用
  • +
+
ETL技術 +

TXT、MARKDOWN、PDF、HTML、XLSX、XLS、DOCX、CSV形式の自動クリーニングをサポート。Unstructuredのサービスによる最大限のサポートを実現

+
    +
  • Notionのドキュメントをナレッジベースとして同期可能
  • +
  • ウェブページをナレッジベースとして同期可能
  • +
+
対応ベクトルデータベースQdrant(推奨)、Weaviate、Zilliz/Milvus、Pgvector、Pgvector-rs、Chroma、OpenSearch、TiDB、Tencent Vector、Oracle、Relyt、AnalyticDB、Couchbase、OceanBase
エージェント技術 +

ReAct、Function Call

+

ツールサポート

+
    +
  • OpenAIプラグイン標準のツールを呼び出し可能
  • +
  • OpenAPI Specification APIを直接ツールとしてロード可能
  • +
+

内蔵ツール

+
    +
  • 40種類以上(2024年Q2時点)
  • +
+
ログ機能あり、ログに基づくアノテーション
アノテーション返答人間がアノテーションしたQ&Aペアに基づく類似度ベースの返答
モデルのファインチューニング用データ形式としてエクスポート可能
コンテンツモデレーションOpenAI Moderationまたは外部API
チームコラボレーションワークスペース、複数メンバー管理
API仕様RESTful、ほとんどの機能をカバー
デプロイ方法Docker、Helm
diff --git a/ja-jp/getting-started/readme/model-providers.mdx b/ja-jp/getting-started/readme/model-providers.mdx new file mode 100644 index 00000000..6bd2ba78 --- /dev/null +++ b/ja-jp/getting-started/readme/model-providers.mdx @@ -0,0 +1,384 @@ +--- +title: モデルプロバイダーリスト +--- + +Difyは以下のモデルプロバイダーをサポートしています: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
プロバイダーLLMテキスト埋め込み再ランク付け音声認識音声合成
OpenAI✔️(🛠️)(👓)✔️✔️✔️
Anthropic✔️(🛠️)
Azure OpenAI✔️(🛠️)(👓)✔️✔️✔️
Gemini✔️
Google Cloud✔️(👓)✔️
Nvidia API Catalog✔️✔️✔️
Nvidia NIM✔️
Nvidia Triton Inference Server✔️
AWS Bedrock✔️✔️
OpenRouter✔️
Cohere✔️✔️✔️
together.ai✔️
Ollama✔️✔️
Mistral AI✔️
groqcloud✔️
Replicate✔️✔️
Hugging Face✔️✔️
Xorbits inference✔️✔️✔️✔️✔️
智谱(ズーパ)✔️(🛠️)(👓)✔️
百川(バイチュアン)✔️✔️
讯飞星火(イフライシンカ)✔️
Minimax✔️(🛠️)✔️
通義千問(トンイーチェンウェン)✔️✔️✔️
文心一言(ブンシンイチゲン)✔️✔️
月之暗面(ゲツノアンメン)✔️(🛠️)
Tencent Cloud✔️
階躍星辰(カイヤクセイシン)✔️(🛠️)(👓)
火山エンジン✔️✔️
零一万物(レイイチバンブツ)✔️
360 智脑✔️
Azure AI Studio✔️✔️
deepseek✔️(🛠️)
騰訊混元(テンセンコンゲン)✔️
SILICONFLOW✔️✔️
Jina AI✔️✔️
ChatGLM✔️
Xinference✔️(🛠️)(👓)✔️✔️
OpenLLM✔️✔️
LocalAI✔️✔️✔️✔️
OpenAI API互換✔️✔️✔️
PerfXCloud✔️✔️
Lepton AI✔️
novita.ai✔️
Amazon Sagemaker✔️✔️✔️
Text Embedding Inference✔️✔️
+ +その中で (🛠️) は関数呼び出しをサポートすることを、(👓) は視覚能力を持つことを示します。 + +この表は常に更新されています。また、コミュニティメンバーからのモデル供給者に関する様々な[リクエスト](https://github.com/langgenius/dify/discussions/categories/ideas)も注視しています。必要なモデル供給者がこのリストにない場合は、プルリクエストを提出して貢献することができます。詳しくは、[contribution](../../community/contribution.md)ガイドをご覧ください。 diff --git a/ja-jp/guides/annotation/README.mdx b/ja-jp/guides/annotation/README.mdx new file mode 100644 index 00000000..ef3bae4a --- /dev/null +++ b/ja-jp/guides/annotation/README.mdx @@ -0,0 +1,3 @@ +--- +title: アノテーション +--- diff --git a/ja-jp/guides/annotation/annotation-reply.mdx b/ja-jp/guides/annotation/annotation-reply.mdx new file mode 100644 index 00000000..e0281738 --- /dev/null +++ b/ja-jp/guides/annotation/annotation-reply.mdx @@ -0,0 +1,87 @@ +--- +title: アノテーションリプライ +--- + + +アノテーションリプライ機能は、手動編集されたアノテーションを通じて、アプリケーションにカスタマイズされた高品質なQ&Aリプライ能力を提供します。 + +適用シーン: + +* **特定分野のカスタマイズ回答:** 企業や政府などのカスタマーサービスやナレッジベースQ&Aのシーンで、特定の問題に対してシステムが明確な結果を提供することを望む場合、特定の問題についてカスタマイズされた出力結果が必要です。例えば、特定の問題に対する「標準回答」や、特定の問題に対する「回答不可」の設定などが挙げられます。 +* **POCやデモ製品の迅速なチューニング:** プロトタイプ製品を迅速に構築する際に、アノテーションリプライを通じて実現されたカスタマイズ回答は、Q&Aの生成結果の予期を効率的に向上させ、顧客満足度を高めることができます。 + +アノテーションリプライ機能は、LLM (大規模言語モデル) の生成プロセスをスキップし、RAG (Retrieval-Augmented Generation) の生成幻覚問題を回避するための別の検索強化システムを提供するものです。 + +### 使用プロセス + +1. アノテーションリプライ機能を有効にすると、LLMの対話リプライ内容に対してアノテーションを行うことができます。LLMがリプライした高品質な回答を直接アノテーションとして追加することもできますし、必要に応じて高品質な回答を手動で編集することもできます。これらの編集されたアノテーション内容は永続化保存されます。 +2. ユーザーが再度類似の質問をした場合、その質問をベクトル化し、類似するアノテーションをクエリします。 +3. マッチング項目が見つかった場合、そのアノテーションに対応する回答を直接返し、LLMやRAGのプロセスを経由せずにリプライします。 +4. マッチング項目が見つからなかった場合、質問は通常プロセスを継続します(LLMやRAGに渡されます)。 +5. 
アノテーションリプライ機能をオフにすると、システムはアノテーションからのマッチングリプライを継続して行いません。 + +![アノテーションリプライの流れ](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/7bebcf85d52f65d5649956f47ed33d43.png) + +### 提示詞編成でアノテーションリプライを有効にする + +「アプリケーション構築->機能追加」からアノテーションリプライのスイッチを有効にします: + +![提示詞編成でアノテーションリプライを有効にする](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/b467da1fbaa9beb22cfb2a987f51f653.png) + +有効にする際には、まずアノテーションリプライのパラメータを設定する必要があります。設定可能なパラメータには次のものがあります:スコア閾値と埋め込みモデル + +**スコア閾値**:アノテーションリプライのマッチング類似度閾値を設定するために使用されます。閾値スコアを超えるアノテーションのみがリコールされます。 + +**埋め込みモデル**:アノテーションテキストをベクトル化するために使用され、モデルの切り替え時には再度埋め込みが生成されます。 + +保存して有効にすると、この設定は直ちに有効となり、システムはすべての保存されたアノテーションに対して埋め込みモデルを利用して埋め込みを生成し保存します。 + +![アノテーションリプライのパラメータ設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/a2c7b82a4f25a96fcdf68c807fb96812.png) + +### 会話デバッグページでアノテーションを追加する + +デバッグおよびプレビューページでモデルのリプライ情報に直接アノテーションを追加または編集できます。 + +![アノテーションリプライを追加する](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/e064e3dcca3f04e16f5269b169820d2d.png) + +必要な高品質リプライに編集して保存します。 + +![アノテーションリプライを編集する](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/b79aabe6e9b336e26ca409a49526501e.png) + +同じユーザー質問を再度入力すると、システムは既に保存されたアノテーションを使用してユーザー質問に直接リプライします。 + +![保存されたアノテーションを通じてユーザー質問にリプライする](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/810f640d184227f4918ee197ff906203.png) + +### ログとアノテーションでアノテーションリプライを有効にする + +「アプリケーション構築->ログとアノテーション->アノテーション」からアノテーションリプライのスイッチを有効にします: + +![ログとアノテーションでアノテーションリプライを有効にする](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/c74951765f078392924da901008eb815.png) + +### アノテーションバックエンドでアノテーションリプライのパラメータを設定する + +アノテーションリプライで設定可能なパラメータには次のものがあります:スコア閾値と埋め込みモデル + +**スコア閾値**:アノテーションリプライのマッチング類似度閾値を設定するために使用されます。閾値スコアを超えるアノテーションのみがリコールされます。 + +**埋め込みモデル**:アノテーションテキストをベクトル化するために使用され、モデルの切り替え時には再度埋め込みが生成されます。 + 
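+上記のマッチングの流れ(ユーザー質問のベクトル化 → 類似アノテーションの検索 → スコア閾値による判定)は、概念的には次のPythonスケッチのように表せます。`annotations`のダミーデータや埋め込みベクトルはあくまで説明用の仮定であり、Difyの実際の実装を示すものではありません。
+
+```python
+from math import sqrt
+
+def cosine_similarity(a, b):
+    # 2つのベクトル間のコサイン類似度(0.0〜1.0)を計算する
+    dot = sum(x * y for x, y in zip(a, b))
+    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
+    return dot / norm if norm else 0.0
+
+def match_annotation(question_vec, annotations, score_threshold=0.9):
+    """保存済みアノテーションから最も類似度の高いものを探し、
+    スコア閾値以上の場合のみその回答を返す。閾値未満なら None を返し、
+    通常の LLM / RAG プロセスにフォールバックする想定。"""
+    best, best_score = None, 0.0
+    for ann in annotations:
+        score = cosine_similarity(question_vec, ann["embedding"])
+        if score > best_score:
+            best, best_score = ann, score
+    if best is not None and best_score >= score_threshold:
+        return best["answer"]
+    return None
+
+# 説明用のダミーデータ(実際は埋め込みモデルが質問と回答をベクトル化する)
+annotations = [
+    {"answer": "営業時間は9時から18時です。", "embedding": [1.0, 0.0, 0.0]},
+    {"answer": "返品は30日以内に可能です。", "embedding": [0.0, 1.0, 0.0]},
+]
+
+print(match_annotation([0.99, 0.05, 0.0], annotations, 0.9))  # 閾値以上 → アノテーション回答を返す
+print(match_annotation([0.5, 0.5, 0.5], annotations, 0.9))    # 閾値未満 → None(LLM / RAG へ)
+```
+
+スコア閾値を上げるほど完全一致に近い質問だけがアノテーション回答にヒットし、下げるほど広くヒットするという挙動が、このスケッチからも確認できます。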
+![アノテーションリプライのパラメータを設定する](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/5bbd94402452e3f4ecc29eb398591585.png) + +### アノテーションQ&Aペアを一括インポートする + +一括インポート機能内で、アノテーションインポートテンプレートをダウンロードし、テンプレート形式に従ってアノテーションQ&Aペアを編集します。編集が完了したら、一括インポートします。 + +![アノテーションQ&Aペアを一括インポートする](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/ad6497dbe8c93fe9988cf76775434a7c.png) + +### アノテーションQ&Aペアを一括エクスポートする + +アノテーション一括エクスポート機能を通じて、システム内に保存されたすべてのアノテーションQ&Aペアを一度にエクスポートできます。 + +![アノテーションQ&Aペアを一括エクスポートする](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/4d80d0a9b8056711a2dcdf664c19e840.png) + +### アノテーションリプライのヒット履歴を確認する + +アノテーションヒット履歴機能内で、すべてのヒットしたアノテーションの編集履歴、ヒットしたユーザー質問、リプライ回答、ヒットソース、マッチング類似度スコア、ヒット時間などの情報を確認できます。これらのシステム情報に基づいて、アノテーション内容を継続的に改善することができます。 + +![アノテーションリプライのヒット履歴を確認する](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/26b6c37dcff225201ea5b4fb712b2d4d.png) \ No newline at end of file diff --git a/ja-jp/guides/annotation/logs.mdx b/ja-jp/guides/annotation/logs.mdx new file mode 100644 index 00000000..b0cf7805 --- /dev/null +++ b/ja-jp/guides/annotation/logs.mdx @@ -0,0 +1,38 @@ +--- +title: ログとアナウンス +--- + + + +アプリがユーザーデータを収集する際には、必ず現地の法規を遵守してください。一般的な方法としては、プライバシーポリシーを公開し、ユーザーの同意を得ることです。 + + +ログ(Logs)機能は、Dify アプリの動作を観察およびマークするためのものです。Dify はアプリのすべてのインタラクションプロセスを記録し、ウェブアプリ(WebApp)や API を通じて呼び出される場合でも、プロンプトエンジニアや LLM 運用担当者に視覚的な LLM アプリ運営体験を提供します。 + +### ログコンソールの使用 + +アプリの左側ナビゲーションに**ログ(Logs)**を見つけることができ、このページは通常以下を表示します: + +* 選択した時間内のユーザーおよびユーザーのインタラクション記録 +* ユーザー入力と AI 出力の結果。対話型アプリの場合、通常は一連のメッセージフロー +* ユーザーおよび運用担当者の評価、ならびに運用担当者の改良マーク + +注意点:チーム内の複数の協力者が同じログをマークすると、最後のマークが以前のマークを上書きします。 + +> 無料プランのチームでは、インタラクションログは過去30日間のみ保存されます。より長期間のインタラクション履歴を保存したい場合は、[価格ページ](https://dify.ai/pricing)を訪れて、上位プランにアップグレードするか、[コミュニティエディション](../../getting-started/install-self-hosted/docker-compose)を展開することを検討してください。 + +### 改良アナウンス + + +これらのマークは、Dify 
の後続バージョンでモデルの微調整(Fine-tuning)に使用され、モデルの正確性と返信スタイルを向上させることを目的としています。現在のプレビュー版では、マークのみがサポートされています。 + + +![ログをマークして改良](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/annotation/a17d81cd50a6788df3bf6853cc963d0b.png) + +ログをクリックすると、画面の右側にログ詳細パネルが開き、このパネルで運用担当者はインタラクションに対してマークを追加できます: + +* 良いパフォーマンスのメッセージに「いいね」 +* 悪いパフォーマンスのメッセージに「よくないね」 +* 改善結果に対して改良返信をマーク。これはあなたが期待する AI の返信テキストです + +注意点:チーム内の複数の管理者が同じログをマークすると、最後のマークが以前のマークを上書きします。 \ No newline at end of file diff --git a/ja-jp/guides/application-orchestrate/README.mdx b/ja-jp/guides/application-orchestrate/README.mdx new file mode 100644 index 00000000..37519dd4 --- /dev/null +++ b/ja-jp/guides/application-orchestrate/README.mdx @@ -0,0 +1,30 @@ +--- +title: アプリ・オーケストレーション +--- + + +Difyにおいて、「アプリ・オーケストレーション」とは、GPTなどの大規模言語モデルを基に構築された実際のシナリオアプリケーションを指します。アプリケーションを作成することで、特定のニーズに応じたスマートAI技術を適用することができます。これは、AIアプリケーションの開発のためのエンジニアリングパラダイムと具体的なデリバラブルの両方を含んでいます。 + +要するに、アプリケーションは開発者に以下を提供します: + +* トークン認証を通じて、バックエンドまたはフロントエンドアプリケーションから直接呼び出せる、使いやすいAPI +* すぐに使える、美しくホスティングされたWebApp。WebAppのテンプレートを使用して二次開発が可能 +* プロンプトエンジニアリング、コンテキスト管理、ログ分析、および注釈を含む使いやすいインターフェースのセット + +これらの**いずれか一つ**または**すべて**を選んで、あなたのAIアプリケーション開発をサポートできます。 + +### アプリケーションタイプ + +Difyには、五つのアプリケーションタイプが提供されています: + +* **チャットボット**:LLMを基にした対話型インタラクションアシスタント。 +* **テキスト ジェネレーター**:ストーリーの執筆、テキスト分類、翻訳などのテキスト生成タスク向けのアシスタント。 +* **エージェント**:タスクを分解し、推論し、ツールを呼び出す対話型インテリジェントアシスタント。 +* **チャットフロー**:メモリ機能を備えたマルチラウンドの複雑な対話タスクのワークフローオーケストレーション。 +* **ワークフロー**:自動化やバッチ処理などの単一ラウンドのタスクのためのワークフローオーケストレーション。 + +テキスト ジェネレーターアプリケーションとチャットボットアシスタントの違いは以下の表をご覧ください: + +
| | テキスト ジェネレーター | チャットボット |
| --- | --- | --- |
| Webアプリインターフェース | フォーム+結果式 | チャット式 |
| WebAPIエンドポイント | `completion-messages` | `chat-messages` |
| インタラクション方式 | 一問一答 | 対話型のやりとり |
| ストリーミング結果返却 | 対応 | 対応 |
| コンテキスト保存 | セッション内のみ | 継続的に保存 |
| ユーザー入力フォーム | 対応 | 対応 |
| データセットとプラグイン | 対応 | 対応 |
| AIオープニング | 非対応 | 対応 |
| シナリオ例 | 翻訳、判断、インデックス付け | チャット |
+ +### \ No newline at end of file diff --git a/ja-jp/guides/application-orchestrate/agent.mdx b/ja-jp/guides/application-orchestrate/agent.mdx new file mode 100644 index 00000000..51be95ce --- /dev/null +++ b/ja-jp/guides/application-orchestrate/agent.mdx @@ -0,0 +1,81 @@ +--- +title: エージェント +--- + + +### 定義 + +エージェントアシスタントは、大規模言語モデルの推論能力を活用し、複雑な人間のタスクを自律的に目標設定、タスク分解、ツールの呼び出し、プロセスのイテレーションを行い、人間の介入なしでタスクを完了することができます。 + +### エージェントアシスタントの使い方 + +迅速に使い始めるために、「探索」でエージェントアシスタントのアプリケーションテンプレートを見つけて自分のワークスペースに追加するか、それを基にカスタマイズすることができます。新しいDifyスタジオでは、ゼロから自分専用のエージェントアシスタントを編成し、財務報告書の分析、レポートの作成、ロゴデザイン、旅行計画などのタスクを完了する手助けをすることができます。 + +
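本文の「目標設定・タスク分解・ツール呼び出し・反復」という流れは、概念的には次のようなループとして捉えられます(実際のDifyエージェントの実装ではなく、ツール名や手順列はすべて説明用の仮のものです):

```python
# エージェントの「タスク分解 → ツール呼び出し → 観察 → 反復」の概念的なスケッチ
def calculator(expression):
    # 仮のツール:組み込み関数を無効化した上で数式だけを評価する
    return eval(expression, {"__builtins__": {}})

TOOLS = {"calculator": calculator}

def run_agent(steps):
    # steps は LLM が推論の結果として生成したと仮定する (ツール名, 入力) の列
    observations = []
    for tool_name, tool_input in steps:
        result = TOOLS[tool_name](tool_input)
        observations.append(result)  # 観察結果を蓄積し、次の推論の材料にする
    return observations

print(run_agent([("calculator", "12 * 30"), ("calculator", "360 + 5")]))
# → [360, 365]
```

実際のエージェントでは、各ステップの観察結果をLLMに戻して次の行動(別のツール呼び出しか、最終回答か)を決定する、という点がこのスケッチとの違いです。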
+ + +
+ +### 会話のオープニング設定 + +エージェントアシスタントの会話オープニングとオープニング質問を設定できます。設定された会話オープニングは、ユーザーが初めて対話を開始する際に、アシスタントが完了できるタスクや提案される質問の例を表示します。 + +![会話のオープニングとオープニング質問を設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/df2f6f23bc2aa6ff65e6560ad45dd94a.png) + +### ファイルのアップロード + +Claude 3.5 Sonnet (https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) や Gemini 1.5 Pro (https://ai.google.dev/api/files) など、一部のLLMはファイル処理に標準対応しています。各LLMのウェブサイトで、ファイルのアップロード機能について詳しくご確認ください。 + +ファイルの読み込みに対応したLLMを選択し、「Document」を有効にしてください。これにより、チャットボットは複雑な設定なしでファイルの内容を理解し、利用できるようになります。 + +![](https://assets-docs.dify.ai/2024/11/9f0b7a3c67b58c0bd7926501284cbb7d.png) + +### デバッグとプレビュー + +エージェントアシスタントの編成が完了したら、アプリとして公開する前にデバッグとプレビューを行い、アシスタントのタスク完了効果を確認できます。 + +![デバッグとプレビュー](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/50f9f42ad312c9d37a7134562f346392.png) + +### アプリの公開 + +![アプリをWebアプリとして公開](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/1b5934f412cfe0d4c44bfa5ad1a5fc66.png) diff --git a/ja-jp/guides/application-orchestrate/app-toolkits/README.mdx b/ja-jp/guides/application-orchestrate/app-toolkits/README.mdx new file mode 100644 index 00000000..e511de1b --- /dev/null +++ b/ja-jp/guides/application-orchestrate/app-toolkits/README.mdx @@ -0,0 +1,63 @@ +--- +title: ツールキット +--- + + +**スタジオ -- アプリケーションオーケストレーション**内で**機能を追加する**をクリックし、アプリケーションツールボックスを開きます。 + +アプリケーションツールボックスは、Difyの[アプリケーション](../#application\_type)に対して様々な付加機能を提供します。 + +
+ + +
+ +### 会話のオープニング + +対話型アプリケーションでは、AIが最初の発言や質問を行います。オープニングのメッセージや質問を編集することで、ユーザーに質問を促し、アプリの背景を説明し、対話のハードルを下げることができます。 + +![会話のオープニング](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/d12afc44dc5c0ffe83395fc00abb0039.png) + +### 次の質問の提案 + +次の質問の提案を設定すると、AIが前回の対話内容に基づいて3つの質問を生成し、次の対話を誘導します。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/ff7d0c23ec0ff81a9a29b1451b1df546.png) + +### テキストから音声への変換 + +この機能をオンにすると、AIの返信内容を自然な音声に変換できます。 アプリケーションツールボックスで「テキストから音声へ」のボタンを押すと、この機能を使えます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/a97a8ce736c8f067a2049a4a1934064d.png) + +### 音声からテキストへの変換 + +この機能をオンにすると、アプリ内で録音し、その音声を自動的にテキストに変換できます。 アプリケーションツールボックスで「音声からテキストへ」のボタンを押すと、この機能を使えます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/323f5b74bd92d1dee9c374ebe0a9cd77.png) + +### 引用と帰属 + +この機能をオンにすると、大規模言語モデルがナレッジベースからの内容を引用して回答する際に、返信内容の下に具体的な引用段落情報(元の段落テキスト、段落番号、マッチ度など)を表示できます。 + +詳しい説明は[引用と帰属](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu)をご覧ください。 + +### コンテンツレビュー + +AIアプリと対話する際には、内容の安全性、ユーザー体験、法律規制など、さまざまな厳しい要求があります。このような場合には「センシティブコンテンツレビュー」機能が必要で、エンドユーザーにより良い対話環境を提供します。 + +詳しい説明は[センシティブコンテンツレビュー](moderation-tool.md)をご覧ください。 + +### 注釈付き返信 + +注釈付き返信機能は、人手で編集された注釈によって、アプリにカスタマイズされた高品質な質問応答能力を提供します。 + +詳しい説明は[注釈付き返信](../../annotation/annotation-reply.md)をご覧ください。 diff --git a/ja-jp/guides/application-orchestrate/app-toolkits/moderation-tool.mdx b/ja-jp/guides/application-orchestrate/app-toolkits/moderation-tool.mdx new file mode 100644 index 00000000..0e2fea73 --- /dev/null +++ b/ja-jp/guides/application-orchestrate/app-toolkits/moderation-tool.mdx @@ -0,0 +1,30 @@ +--- +title: コンテンツモデレーション +--- + 
+AIアプリケーションと対話する際、コンテンツの安全性、ユーザーエクスペリエンス、法律と規制など多方面で厳しい要件が求められます。このような場合、エンドユーザーにより良いインタラクティブ環境を提供するために「センシティブワード審査」機能が必要です。プロンプト編成ページで「機能を追加」をクリックし、下部のツールボックス「コンテンツ監査」を見つけます: + +![コンテンツ監査](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/734ec4ebcf6b32ec1c350636565636bf.png) + +### 機能1:OpenAI モデレーション API の呼び出し + +OpenAI やほとんどの大規模言語モデル (LLM) 会社が提供するモデルには、暴力、性、違法行為などの議論を含むコンテンツを出力しないようにするためのコンテンツ審査機能が備わっています。OpenAI はこのコンテンツ審査機能を公開しており、詳細は [platform.openai.com](https://platform.openai.com/docs/guides/moderation/overview) を参照してください。今では Dify でも直接 OpenAI モデレーション API を呼び出すことができます。入力内容や出力内容を監査するには、対応する「プリセット応答」を入力するだけです。 + +![OpenAI モデレーション API](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/50c16bbe962b02d3748c95d3b41cf31a.png) + +### 機能2:カスタムキーワード + +開発者は監査が必要なセンシティブワードをカスタムキーワードとして設定できます。例えば「kill」をキーワードとして設定し、ユーザーが入力した際に監査動作を行い、プリセット応答内容として「The content is violating usage policies.」と設定します。予測される結果として、ユーザーが「kill」を含むテキストを入力すると、センシティブワード審査ツールが作動し、プリセット応答内容が返されます。 + +![キーワード](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/7ae6e1462d020276506f87b9ea5db409.png) + +### 機能3:センシティブワード審査のモデレーション拡張 + +企業内部では異なるセンシティブワード審査のメカニズムが存在することが多いです。企業が企業内ナレッジベースチャットボットなどのAIアプリケーションを開発する際、社員が入力したクエリ内容をセンシティブワード審査する必要があります。このため、開発者は自社のセンシティブワード審査メカニズムに基づいて API 拡張を作成することができます。詳細は [moderation.md](../../extension/api-based-extension/moderation.md "mention") を参照してください。これにより、Dify 上で呼び出し、高度なカスタマイズとプライバシー保護を実現することができます。 + +![モデレーション設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/520b5b7be1dc02c5b874b55a15e57f45.png) + +例えば、私たちのローカルサービスで、`ドナルド・ジョン・トランプ`というセンシティブワード審査ルールをカスタマイズします。ユーザーが`query`変数に「トランプ」と入力すると、対話時に "貴社のご使用ポリシーに反するコンテンツとなっております。" という応答が返されます。テスト結果は以下の通りです: + +![モデレーションテスト](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/app-toolkits/66cae99e32d12f1b5b168d1b10eb1a63.png) diff --git 
a/ja-jp/guides/application-orchestrate/chatbot-application.mdx b/ja-jp/guides/application-orchestrate/chatbot-application.mdx new file mode 100644 index 00000000..a9bd83c0 --- /dev/null +++ b/ja-jp/guides/application-orchestrate/chatbot-application.mdx @@ -0,0 +1,90 @@ +--- +title: チャットボット +--- + + +チャットボットは、ユーザーとの継続的な対話を一問一答形式で行います。 + +### 適用シーン + +チャットボットは、カスタマーサービス、オンライン教育、医療、金融サービスなどの分野で利用されることがあります。これらのアプリは、組織の業務の効率を向上させたり、人件費を削減したり、ユーザーエクスペリエンスを高めるのに寄与します。 + +### 編成方法 + +チャットボットの作成には、プロンプト、変数、コンテキスト、オープニングダイアログ、次の質問の提案などが含まれています。 + +ここでは、**面接官**用のアプリを例に使って、チャットボットの編成方法を紹介します。 + +#### アプリの作成 + +ホームページで「最初から作成」をクリックしてアプリを作成します。アプリ名を入力し、アプリタイプは**チャットボット**を選択します。 + +![チャットボットの作成](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/24a6f30145d5005413fa75693996377c.png) + +#### アプリの編成 + +アプリを作成すると、自動的にアプリの概要ページに移動します。左側のメニューから編成をクリックしてアプリを編成します。 + +![アプリの編成](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/4332c94f126ae1ce72e1881e91d81729.png) + +**プロンプトの記入** + +プロンプトは、AIが専門的な回答を行う範囲を制限し、回答をより正確にします。組み込みのプロンプトジェネレータを使用して、適切なプロンプトを作成することができます。プロンプト内には、たとえば `{{input}}` のようなフォーム変数を挿入することができます。変数内の値は、ユーザーが入力した値に置き換えられます。 + +例: + +1. インタビューシナリオの指示を入力します。 +2. プロンプトが自動的に右側の内容欄に生成されます。 +3. 
カスタム変数をプロンプトに挿入することで、特定の要望や詳細に応じてカスタマイズが可能です。 + +ユーザーエクスペリエンスを向上させるために、オープニングダイアログを追加することができます:`こんにちは、{{name}}さん。私はあなたの面接官、Bobです。準備はできていますか?`。オープニングダイアログを追加するには、ページ下部の「機能を追加」ボタンをクリックして、「会話の開始」機能を開きます: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/8ee49c47ed4b197e77d88e95b5bf4b2a.png) + +オープニングステートメントを編集する際に、いくつかのオープニング質問を追加することもできます: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/4bbcbecba6f3d826b9f7d0bf1d5e2079.png) + +#### コンテキストの追加 + +AIの対話範囲を[ナレッジベース](../knowledge-base/)内に制限したい場合、企業内のカスタマーサービス用語規準などを「コンテキスト」で参照することができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/dd33b25175c48bf0776ad32e10ad9f33.png) + +### ファイルのアップロード + +[Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) や [Gemini 1.5 Pro](https://ai.google.dev/api/files) など、一部のLLMはファイル処理に標準対応しています。各LLMのウェブサイトで、ファイルのアップロード機能について詳しくご確認ください。 + +ファイルの読み込みに対応したLLMを選択し、「Document」を有効にしてください。これにより、チャットボットは複雑な設定なしでファイルの内容を理解し、利用できるようになります。 + +![](https://assets-docs.dify.ai/2024/11/823399d85e8ced5068dc9da4f693170e.png) + +#### デバッグ + +右側のユーザー入力項目に内容を入力してデバッグします。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/58a59cfd677990426d6059008510b7ce.png) + +回答結果が望ましくない場合は、プロンプトやモデルを調整することができます。また、複数のモデルを同期してデバッグすることもでき、適切な構成を組み合わせることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/96c97bea71df343ddc8dc013602e8a32.jpeg) + +単一モデルでのデバッグが効率的ではない場合、[**「複数のモデルでのデバッグ」**](./multiple-llms-debugging.md)機能を使用して、複数のモデルの回答効果を一括確認することもできます。 + +#### アプリの公開 + +アプリのデバッグが完了したら、右上の**公開**ボタンをクリックして独立したAIアプリを生成します。公開URLを使用してアプリを体験するだけでなく、APIベースの開発やWebサイトへの組み込みなども行うことができます。詳細については[公開](../application-publishing/README.md)を参照してください。 + 
+公開されたアプリをカスタマイズしたい場合は、当社のオープンソースの[WebAppテンプレート](https://github.com/langgenius/webapp-conversation)をForkしてください。テンプレートをベースに、シチュエーションやスタイルに合わせたアプリを作成できます。 + +### よくある質問 + +**チャットボット内にサードパーティツールを追加するにはどうすればよいですか?** + +チャットボットアプリは、サードパーティツールの追加をサポートしていません。[エージェント](../application-orchestrate/agent.md) 内でサードパーティツールを追加できます。 + +**チャットボット作成時のメタデータフィルタリング活用方法とは?** + +詳細な手順は、[アプリ内でのナレッジベース統合](https://docs.dify.ai/ja-jp/guides/knowledge-base/integrate-knowledge-within-application)の「**メタデータフィルタリング → チャットボット**」をご確認ください。 diff --git a/ja-jp/guides/application-orchestrate/creating-an-application.mdx b/ja-jp/guides/application-orchestrate/creating-an-application.mdx new file mode 100644 index 00000000..a7adbb67 --- /dev/null +++ b/ja-jp/guides/application-orchestrate/creating-an-application.mdx @@ -0,0 +1,56 @@ +--- +title: アプリ作成 +--- + + +Difyスタジオでは、アプリケーションを作成するにあたり以下の3つの方法があります: + +* アプリケーションテンプレートを使用(初心者におすすめ) +* 最初からアプリケーションを作成 +* DSLファイルをインポートして作成(ローカル・オンライン) + +### テンプレートを使ってアプリケーションを作成 + +Difyを初めて使う場合、アプリケーション制作に不慣れなこともあるでしょう。そのため、Difyチームは様々な用途に対応する高品質なテンプレートを提供しており、これによりDifyで作成可能なアプリケーションの種類を迅速に把握できます。 + +ナビゲーションメニューより「スタジオ」を選んだ後、「テンプレートから作成」をアプリケーションリストから選択してください。 + +![テンプレートからアプリケーションを作成](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/448760da7b5f22f9e6bee7fd47e33a90.png) + +好みのテンプレートを選択し、**このテンプレートを使用する**ボタンをクリックします。 + +### 新しいアプリケーションの作成 + +Difyで最初からアプリケーションを作成する場合、ナビゲーションメニューから「スタジオ」を選び、「最初から作成」をアプリケーションリストで選択します。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/9f5d811f6e117d4f9c11c9128e44d3f7.png) + +Difyには、チャットボット、テキストジェネレーター、エージェント、ワークフロー、チャットフローという5つの異なる種類のアプリケーションがあります。 + +アプリケーションを作成する際には、名前を付け、適切なアイコンを選択し、このアプリケーションの目的を簡潔に説明することで、チーム内での使用を容易にします。 + +![アプリケーションを最初から作成](https://assets-docs.dify.ai/2024/12/b0598446c2e129047aa7f4f06f2bf74d.png) + +### DSLファイルから作成 + + +Dify DSLは、Dify.AIが定めるAIアプリケーション開発のための標準ファイルフォーマット(YML)です。この標準には、アプリケーションの基本情報、モデルのパラメータ、オーケストレーションの設定などが含まれます。 + + 
+#### ローカルのDSLファイルをインポート + +コミュニティや他者から提供されたDSLファイル(テンプレート)を持っている場合は、「DSLファイルをインポート」をスタジオから選択してください。インポート後、元のアプリケーションの設定が直接読み込まれます。 + +![DSLファイルをインポートしてアプリケーションを作成](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/188ed099d344761e396510031f991dcc.png) + +#### URLを通じてDSLファイルをインポート + +以下の形式を使用して、URL経由でDSLファイルをインポートすることができます: + +```url +https://example.com/your_dsl.yml +``` + +![URL経由でDSLファイルをインポート](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-orchestrate/661112195d6fea2437ffa0dfe94436cb.png) + +> DSLファイルを取り込む際には、バージョンが自動で確認されます。バージョン間に大きな違いがあると、互換性に問題が生じる恐れがあります。この件につきましては、[アプリケーション管理:インポート](https://docs.dify.ai/guides/management/app-management#importing-application)セクションで詳細をご覧いただけます。 diff --git a/ja-jp/guides/application-orchestrate/multiple-llms-debugging.mdx b/ja-jp/guides/application-orchestrate/multiple-llms-debugging.mdx new file mode 100644 index 00000000..d0a40c9b --- /dev/null +++ b/ja-jp/guides/application-orchestrate/multiple-llms-debugging.mdx @@ -0,0 +1,25 @@ +--- +title: 複数モデルのデバッグ +--- + +**チャットボット**アプリは、**「複数モデルでデバッグ」**機能をサポートしており、同じ質問に対する異なるモデルの応答を同時に比較できます。 + +![](https://assets-docs.dify.ai/2025/02/6b77275258e6d48a540251beed0d16f3.png) + +一度に最大**4つ**の大規模言語モデルを追加できます。 + +![](https://assets-docs.dify.ai/2025/02/226b016ca72a7a914fe94f8b804d1334.png) + +デバッグ中に、優れたパフォーマンスを発揮するモデルが見つかった場合は、**「単一モデルでデバッグ」**をクリックすると、そのモデル専用のプレビューウィンドウに切り替わります。 + +![](https://assets-docs.dify.ai/2025/02/d273dee2ec4c04f7208a4f7a8b3a86db.png) + +## よくある質問 + +### 1. LLMを追加する際に、モデルが一覧に表示されないのはなぜですか? + +[「モデルプロバイダー」](https://docs.dify.ai/guides/model-configuration)にアクセスし、指示に従って複数のモデルのキーを手動で追加してください。 + +### 2. 複数モデルデバッグモードを終了するにはどうすればよいですか? 
+ +いずれかのモデルを選択し、**「単一モデルでデバッグ」** をクリックすると、複数モデルデバッグモードを終了できます。 diff --git a/ja-jp/guides/application-publishing/README.mdx b/ja-jp/guides/application-publishing/README.mdx new file mode 100644 index 00000000..4df8be32 --- /dev/null +++ b/ja-jp/guides/application-publishing/README.mdx @@ -0,0 +1,20 @@ +--- +title: ウェブアプリの公開 +--- + + + + シングルページWebアプリとして公開 + + + + Webサイトへの埋め込み + + + + APIに基づく開発 + + + + フロントエンドテンプレートに基づいた再開発 + \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/based-on-frontend-templates.mdx b/ja-jp/guides/application-publishing/based-on-frontend-templates.mdx new file mode 100644 index 00000000..98f88734 --- /dev/null +++ b/ja-jp/guides/application-publishing/based-on-frontend-templates.mdx @@ -0,0 +1,43 @@ +--- +title: フロントエンドテンプレートに基づいた再開発 +--- + + +もし開発者が新製品をゼロから開発する場合や製品プロトタイプ設計の段階にある場合、Difyを使用して迅速にAIサイトをリリースすることができます。同時に、Difyは開発者がさまざまな形式のフロントエンドアプリを自由に創造できることを望んでおり、そのため以下を提供しています: + +* **SDK**:さまざまな言語でDify APIに迅速に接続するためのもの +* **WebAppテンプレート**:各種アプリケーションのためのWebApp開発用スキャフォルディング + +WebAppテンプレートはMITライセンスに基づいてオープンソース化されていますので、自由に変更してデプロイすることができ、Difyのすべての機能を実現することができます。また、自分のアプリを実現するための参考コードとしても使用できます。 + +これらのテンプレートはGitHubで見つけることができます: + +* [会話型アプリケーション](https://github.com/langgenius/webapp-conversation) +* [テキスト生成型アプリケーション](https://github.com/langgenius/webapp-text-generator) + +WebAppテンプレートを使用する最も簡単な方法は、GitHubで「このテンプレートを使用」をクリックすることです。これにより、新しいリポジトリがフォークされます。その後、DifyのアプリIDとAPIキーを以下のように設定する必要があります: + +```javascript +export const APP_ID = '' +export const API_KEY = '' +``` + +`config/index.ts`での詳細な設定: +```js +export const APP_INFO: AppInfo = { + "title": 'Chat APP', + "description": '', + "copyright": '', + "privacy_policy": '', + "default_language": 'zh-Hans' +} + +export const isShowPrompt = true +export const promptTemplate = '' +``` + +> アプリIDはアプリのURLの中から取得できます。URLの長い文字列の部分がユニークなアプリIDです。 + +各WebAppテンプレートにはREADMEファイルが含まれており、デプロイ方法の説明が記載されています。通常、WebAppテンプレートには軽量バックエンドサービスが含まれており、これは開発者のAPIキーがユーザーに直接露出しないようにするためのものです。 + 
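テンプレートが軽量バックエンドを挟む理由は、次のスケッチのようにAPIキーの付与をサーバー側に閉じ込めるためです(実際のテンプレートのコードではなく、関数名・キーの値は説明用の仮のものです):

```python
import json

# 仮の値。実際には環境変数などサーバー側の秘密として保持する想定
DIFY_API_KEY = "app-xxxx"

def build_upstream_request(user_query, user_id):
    # フロントエンドから受け取るのはクエリとユーザーIDだけで、
    # Authorization ヘッダーはサーバー側でここで初めて付与する
    return {
        "url": "https://api.dify.ai/v1/chat-messages",
        "headers": {
            "Authorization": f"Bearer {DIFY_API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "inputs": {},
            "query": user_query,
            "response_mode": "streaming",
            "user": user_id,
        }),
    }

req = build_upstream_request("こんにちは", "abc-123")
print(req["headers"]["Authorization"])
```

この形にしておけば、ブラウザ側のコードやネットワークリクエストにAPIキーが平文で現れることはありません。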
+これらのWebAppテンプレートは、AIアプリプロトタイプを迅速に構築し、Difyのすべての機能を使用するのに役立ちます。もしこれらを基に自分のアプリや新しいテンプレートを開発した場合、ぜひ私たちと共有してください。 \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/developing-with-apis.mdx b/ja-jp/guides/application-publishing/developing-with-apis.mdx new file mode 100644 index 00000000..43982735 --- /dev/null +++ b/ja-jp/guides/application-publishing/developing-with-apis.mdx @@ -0,0 +1,130 @@ +--- +title: APIに基づく開発 +--- + + +Difyは、「**バックエンド・アズ・ア・サービス(Backend as a Service)**」の理念に基づいて、すべてのアプリケーションにAPIを提供し、AIアプリケーション開発者に多くの利便性をもたらしています。この理念を通じて、開発者は複雑なバックエンドアーキテクチャやデプロイプロセスを気にすることなく、フロントエンドアプリケーションで大規模言語モデル(LLM)の強力な能力を直接利用できます。 + +### Dify API を使用する利点 + +* フロントエンドアプリケーションが直接安全にLLMの能力を呼び出すことができ、バックエンドサービスの開発プロセスを省略 +* 視覚的なインターフェースでアプリケーションを設計し、すべてのクライアントにリアルタイムで反映 +* LLMプロバイダーの基本能力を良好にパッケージ化 +* LLMプロバイダーをいつでも切り替え、LLMのAPIキーを集中管理 +* 視覚的なインターフェースでアプリケーションを運営、例えばログの分析、ラベリング、ユーザーの活性度の観察 +* アプリケーションに対して継続的により多くのツール能力、プラグイン能力、データセットを提供 + +### 利用方法 + +アプリケーションを選択し、アプリケーション(Apps)の左側ナビゲーションで**APIアクセス(API Access)**を見つけます。このページでDifyが提供するAPIドキュメントを確認し、APIにアクセスするための認証情報を管理できます。 + +![APIアクセス](../../../en/.gitbook/assets/guides/application-publishing/launch-your-webapp-quickly/API Access.png) + +例えば、あなたがコンサルティング会社の開発部門であれば、会社のプライベートデータベースに基づいてAI能力をエンドユーザーや開発者に提供できますが、開発者はあなたのデータやAIロジック設計を把握することはできません。これにより、サービスは安全かつ持続可能に提供され、商業目的を満たすことができます。 + + +ベストプラクティスとして、APIキーはバックエンドで呼び出されるべきで、フロントエンドコードやリクエストに平文で直接露出しないようにしてください。これにより、アプリケーションの悪用や攻撃を防ぐことができます。 + + +アプリケーションに対して**複数のアクセス認証情報**を作成し、異なるユーザーや開発者に提供することができます。これにより、APIの使用者はアプリケーション開発者が提供するAI能力を使用できますが、その背後のプロンプトエンジニアリング、データセット、ツール能力はパッケージ化されています。 + +### テキスト生成型アプリケーション + +高品質なテキスト生成に使用できるアプリケーション、例えば記事生成、要約、翻訳などが含まれます。completion-messagesエンドポイントを呼び出し、ユーザー入力を送信して生成されたテキスト結果を取得します。テキスト生成に使用されるモデルパラメータとプロンプトテンプレートは、Difyのプロンプト編成ページで開発者が設定したものに依存します。 + +**アプリケーション -> APIアクセス**でそのアプリケーションのAPIドキュメントとサンプルリクエストを見つけることができます。 + +例えば、テキスト補完情報のAPIの呼び出し例: + + + + ``` +curl --location --request POST 
'https://api.dify.ai/v1/completion-messages' \ +--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \ +--header 'Content-Type: application/json' \ +--data-raw '{ + "inputs": {}, + "response_mode": "streaming", + "user": "abc-123" +}' +``` + + + ```python +import requests +import json + +url = "https://api.dify.ai/v1/completion-messages" + +headers = { + 'Authorization': 'Bearer ENTER-YOUR-SECRET-KEY', + 'Content-Type': 'application/json', +} + +data = { + "inputs": {"text": 'Hello, how are you?'}, + "response_mode": "streaming", + "user": "abc-123" +} + +response = requests.post(url, headers=headers, data=json.dumps(data)) + +print(response.text) +``` + + + +### 会話型アプリケーション + +大部分のシーンで使用できる会話型アプリケーションは、一問一答形式でユーザーと継続的に会話します。会話を開始するにはchat-messagesエンドポイントを呼び出し、返されたconversation\_idを引き続き提供することで会話を継続することができます。 + +#### `conversation_id` に関する重要事項: + +- **`conversation_id` の生成:** 新しい会話を開始するときは、`conversation_id` フィールドを空のままにしておきます。システムは新しい `conversation_id` を生成して返します。この新しい `conversation_id` は、今後のやり取りで使用して会話を続行します。 +- **既存のセッションでの `conversation_id` の処理:** `conversation_id` が生成されると、Dify ボットとの会話の継続性を確保するために、今後の API 呼び出しにこの `conversation_id` を含める必要があります。以前の `conversation_id` が渡されると、新しい `inputs` は無視されます。進行中の会話では `query` のみが処理されます。 +- **動的変数の管理:** セッション中にロジックまたは変数を変更する必要がある場合は、会話変数 (セッション固有の変数) を使用してボットの動作または応答を調整できます。 + +**アプリケーション -> APIアクセス**でそのアプリケーションのAPIドキュメントとサンプルリクエストを見つけることができます。 + +以下は`chat-messages`のAPIの呼び出し例: + + + + ``` +curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \ +--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \ +--header 'Content-Type: application/json' \ +--data-raw '{ + "inputs": {}, + "query": "eh", + "response_mode": "streaming", + "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276", + "user": "abc-123" +}' + +``` + + + ```python +import requests +import json + +url = 'https://api.dify.ai/v1/chat-messages' +headers = { + 'Authorization': 'Bearer ENTER-YOUR-SECRET-KEY', + 'Content-Type': 'application/json', +} +data = 
{ + "inputs": {}, + "query": "eh", + "response_mode": "streaming", + "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276", + "user": "abc-123" +} + +response = requests.post(url, headers=headers, data=json.dumps(data)) + +print(response.text) +``` + + \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/embedding-in-websites.mdx b/ja-jp/guides/application-publishing/embedding-in-websites.mdx new file mode 100644 index 00000000..fb2cde86 --- /dev/null +++ b/ja-jp/guides/application-publishing/embedding-in-websites.mdx @@ -0,0 +1,139 @@ +--- +title: Webサイトへの埋め込み +--- + + +Dify Apps は iframe を使用してWebサイトに埋め込むことができます。これにより、Dify App をWebサイト、ブログ、またはその他のウェブページに統合できます。 + +Dify Chatbot Bubble Button をWebサイトに埋め込む際に、ボタンのスタイル、位置、その他の設定をカスタマイズできます。 + +## Dify Chatbot Bubble Button のカスタマイズ + +Dify Chatbot Bubble Button は、以下の設定オプションでカスタマイズできます。 + +```javascript +window.difyChatbotConfig = { + // 必須:Dify によって自動的に生成されます + token: 'YOUR_TOKEN', + // オプション:デフォルトは false です + isDev: false, + // オプション:isDev が true の場合、デフォルトは 'https://dev.udify.app'、それ以外の場合は 'https://udify.app' です + baseUrl: 'YOUR_BASE_URL', + // オプション:`id` 以外の有効な HTMLElement 属性(例:`style`、`className` など)を受け入れます + containerProps: {}, + // オプション:ボタンのドラッグを許可するかどうか、デフォルトは `false` です + draggable: false, + // オプション:ボタンのドラッグを許可する軸、デフォルトは 'both'、'x'、'y'、'both' のいずれかを指定できます + dragAxis: 'both', + // オプション:dify チャットボットに設定されている入力オブジェクト + inputs: { + // key は変数名です + // 例: + // name: "NAME" + } +}; +``` + +## デフォルトのボタンスタイルの上書き + +CSS 変数または `containerProps` オプションを使用して、デフォルトのボタンスタイルを上書きできます。CSSの優先度に基づいてこれらの方法を適用し、希望のカスタマイズを実現します。 + +### 1.CSS 変数の変更 + +以下の CSS 変数をカスタマイズに使用できます。 + +```css +/* ボタンの下端からの距離、デフォルトは `1rem` */ +--dify-chatbot-bubble-button-bottom + +/* ボタンの右端からの距離、デフォルトは `1rem` */ +--dify-chatbot-bubble-button-right + +/* ボタンの左端からの距離、デフォルトは `unset` */ +--dify-chatbot-bubble-button-left + +/* ボタンの上端からの距離、デフォルトは `unset` */ +--dify-chatbot-bubble-button-top 
+ +/* ボタンの背景色、デフォルトは `#155EEF` */ +--dify-chatbot-bubble-button-bg-color + +/* ボタンの幅、デフォルトは `50px` */ +--dify-chatbot-bubble-button-width + +/* ボタンの高さ、デフォルトは `50px` */ +--dify-chatbot-bubble-button-height + +/* ボタンの角丸、デフォルトは `25px` */ +--dify-chatbot-bubble-button-border-radius + +/* ボタンのボックスシャドウ、デフォルトは `rgba(0, 0, 0, 0.2) 0px 4px 8px 0px)` */ +--dify-chatbot-bubble-button-box-shadow + +/* ボタンホバー時の変形、デフォルトは `scale(1.1)` */ +--dify-chatbot-bubble-button-hover-transform +``` + +例えば、ボタンの背景色を #ABCDEF に変更するには、次の CSS を追加します。 + +```css +#dify-chatbot-bubble-button { + --dify-chatbot-bubble-button-bg-color: #ABCDEF; +} +``` + +### 2.`containerProps` を使用する + +`style` 属性を使用してインラインスタイルを設定します。 + +```javascript +window.difyChatbotConfig = { + // ... 他の設定 + containerProps: { + style: { + backgroundColor: '#ABCDEF', + width: '60px', + height: '60px', + borderRadius: '30px', + }, + // ちょっとしたスタイル変更の場合、style 属性に文字列を使用することもできます。 + // style: 'background-color: #ABCDEF; width: 60px;', + }, +}; +``` + +`className` 属性を使用して CSS クラスを適用します: + +```javascript +window.difyChatbotConfig = { + // ... 他の設定 + containerProps: { + className: 'dify-chatbot-bubble-button-custom my-custom-class', + }, +}; +``` + +### 3. `inputs` の渡し方 + +サポートされている入力タイプは4種類あります: + +1. **`text-input`**:任意の値を受け入れます。入力文字列の長さが許容される最大長を超える場合、切り詰められます。 +2. **`paragraph`**:`text-input` と同様に、任意の値を受け入れ、文字列が最大長を超える場合には切り詰められます。 +3. **`number`**:数値または数値の文字列を受け入れます。文字列が提供された場合、`Number` 関数を使用して数値に変換されます。 +4. **`options`**:事前に設定されたオプションのいずれかと一致する値を受け入れます。 + +設定例: + +```javascript +window.difyChatbotConfig = { + // 他の設定項目... 
+ inputs: { + name: 'apple', + }, +} ``` + +注意: `embed.js` スクリプトを使用してiframeを作成する場合、各入力値はURLに追加される前にGZIPで圧縮され、base64でエンコードされます。 + +例えば、処理された入力値を含むURLは以下のようになります: +`http://localhost/chatbot/{token}?name=H4sIAKUlmWYA%2FwWAIQ0AAACDsl7gLuiv2PQEUNAuqQUAAAA%3D` \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.mdx b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.mdx new file mode 100644 index 00000000..221564fe --- /dev/null +++ b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/README.mdx @@ -0,0 +1,59 @@ +--- +title: シングルページWebアプリとして公開 +--- + + +Difyを使ってAIアプリを作成するメリットの一つは、数分でユーザー向けのWebアプリを公開できることです。このアプリはあなたのプロンプトに基づいて機能します。 + +* 自己ホストのオープンソース版を使用する場合、そのアプリはあなたのサーバー上で動作します +* クラウドサービスを使用する場合、そのアプリはUdify.appにホストされます + +*** + +### AIサイトの公開 + +アプリ概要ページで、AIサイト(Webアプリ)に関するカードを見つけることができます。Webアプリのアクセスをオンにするだけで、ユーザーと共有できるリンクが得られます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/56093185a61265109cee9f2e00420ff4.png) + +以下の2種類のアプリには、きれいなWebアプリのインターフェースを提供しています: + +* テキスト生成型([プレビューへ](text-generator.md)) +* 対話型([プレビューへ](conversation-application.md)) + +*** + +### AIサイトの設定 + +Webアプリのカード上の**設定**ボタンをクリックすると、AIサイトのオプションをいくつか設定できます。これらは最終ユーザーに表示されます: + +* アイコン +* 名前 +* アプリの説明 +* インターフェース言語 +* 著作権情報 +* プライバシーポリシーリンク +* カスタム免責事項 + + +現在サポートされているインターフェース言語:英語、中国語(簡体字)、中国語(繁体字)、ポルトガル語、ドイツ語、日本語、韓国語、ウクライナ語、ベトナム語。追加の言語が必要な場合は、GitHubでイシューを提出するか、プルリクエストを提出してコードを提供してください。[サポートを求める](../../../community/support.md)。 + + +*** + +### AIサイトの埋め込み + +Difyは、あなたのAIアプリをビジネスWebサイトに埋め込むことをサポートしています。この機能を使えば、数分でビジネスデータを持つ公式サイトのAIカスタマーサポートやビジネス知識Q&Aなどのアプリを作成できます。Webアプリのカード上の埋め込みボタンをクリックし、埋め込みコードをコピーして、Webサイトの目標位置に貼り付けます。 + +* iframeタグの方法 + + iframeコードを、AIアプリを表示するあなたのWebサイトのタグ(例:`
<div>`、`<section>
`)にコピーします。 +* scriptタグの方法 + + scriptコードを、あなたのWebサイトの`<head>`または`<body>`タグにコピーします。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/1e2e8c9adc620f9d6b4ea157119e8659.png) + +例えば、scriptコードを公式サイトの`<head>`に貼り付けると、公式サイトのAIロボットが得られます: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/f5a5e95e1120906669b3c1ad4e186dea.png) \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx new file mode 100644 index 00000000..d7ad2bb4 --- /dev/null +++ b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/conversation-application.mdx @@ -0,0 +1,53 @@ +--- +title: 会話型アプリケーション +--- + + +会話型アプリケーションは、一問一答の形式でユーザーと継続的に会話を行います。会話型アプリケーションは以下の機能をサポートします(アプリケーションの設定時にこれらの機能が有効になっていることを確認してください): + +* 会話前に入力する変数。 +* 会話の作成、ピン留め、削除。 +* 会話のオープニング。 +* 次のステップの質問の提案。 +* 音声認識。 +* 引用と帰属。 + +### 会話前に入力する変数 + +アプリケーションの設定時に変数入力を求める設定をしている場合、会話を始める前に指示に従って情報を入力する必要があります: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/8decae00eeea24622e1f2ef73d4c447e.png) + +必要な内容を入力し、「会話を開始」ボタンをクリックしてチャットを始めます。AIの回答に移動して、会話の内容をコピーしたり、回答に「いいね」や「悪いね」を付けたりできます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/5b7a6f950ed8a2ce3a705f362b4813fe.png) + +### 会話の作成、ピン留め、削除 + +「新しい会話」ボタンをクリックして新しい会話を開始します。会話に移動して、会話を「ピン留め」または「削除」することができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/46372ad4d79a3ea943d43f9434974956.png) + +### 会話のオープニング + +アプリケーションの設定時に「会話のオープニング」機能が有効になっている場合、新しい会話を作成するとAIアプリケーションが自動的に最初の会話を開始します: + 
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/22e59e509296d25eb85cbd541e161c6d.png) + +### 次のステップの質問の提案 + +アプリケーションの設定時に「次のステップの質問の提案」機能が有効になっている場合、会話後にシステムが自動的に3つの関連する質問を提案します: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/f88a7ffd777d51299f8b604249c044b3.png) + +### 音声認識 + +アプリケーションの設定時に「音声認識」機能が有効になっている場合、Webアプリケーション端の入力欄に音声入力のアイコンが表示され、アイコンをクリックすることで音声入力が文字に変換されます: + +_使用するデバイス環境がマイクロフォンの使用を許可していることを確認してください。_ + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/3a64c79792f1166301403f6c44cf4c85.png) + +### 引用と帰属 + +アプリ内でナレッジベースの効果をテストする際、**ワークスペース -- 機能の追加 -- 引用と帰属** に移動し、引用と帰属機能をオンにすることができます。詳細については[「引用と帰属」](https://docs.dify.ai/v/japanese/guides/knowledge-base/retrieval_test_and_citation#id-2-yin-yong-yu-gui-shu)を参照してください。 \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx new file mode 100644 index 00000000..2ccfc998 --- /dev/null +++ b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/text-generator.mdx @@ -0,0 +1,61 @@ +--- +title: テキスト生成型アプリケーション +--- + + +テキスト生成型アプリケーションは、ユーザーが提供するプロンプトに基づいて高品質のテキストを自動生成するアプリケーションです。記事の要約や翻訳など、さまざまなタイプのテキストを生成することができます。 + +テキスト生成型アプリケーションは以下の機能をサポートしています: + +1. 一回実行。 +2. バッチ実行。 +3. 実行結果の保存。 +4. 
より類似した結果の生成。 + +以下にそれぞれの機能を紹介します。 + +### 一回実行 + +クエリ内容を入力し、実行ボタンをクリックすると、右側に結果が生成されます。以下の図のように: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/4c5380cf71066d933082f7c30deacb01.png) + +生成された結果部分では、「コピー」ボタンをクリックすると内容をクリップボードにコピーできます。「保存」ボタンをクリックすると内容を保存できます。「保存済み」タブで保存した内容を見ることができます。また、生成された内容には「いいね」や「悪いね」をつけることもできます。 + +### バッチ実行 + +時には、アプリケーションを何度も実行する必要があります。例えば、テーマに基づいて記事を生成するWebアプリケーションがあるとします。今、100種類のテーマに基づいて記事を生成する必要があるとします。この場合、このタスクを100回も行うのは非常に面倒です。また、1つのタスクが完了するのを待たなければ次のタスクを開始できません。 + +上記のシナリオでは、バッチ実行機能を使うと操作が便利になり(テーマを `csv` ファイルに入力し、一度だけ実行する)、生成時間も節約できます(複数のタスクが同時に実行される)。使用方法は以下の通りです: + +#### ステップ1:バッチ実行ページに入る + +「バッチ実行」タブをクリックすると、バッチ実行ページに入ります。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/c8381ab7fad14a54c86835dc4b1b6b5d.png) + +#### ステップ2:テンプレートをダウンロードして内容を入力する + +「テンプレートダウンロード」ボタンをクリックし、テンプレートをダウンロードします。テンプレートを編集し、内容を入力して `.csv` 形式のファイルとして保存します。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/bae4859c5cb7404ce901b7979237bb93.png) + +#### ステップ3:ファイルをアップロードして実行 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/fc84f62f41c12e14ff85b29e6bf43d27.png) + +生成された内容をエクスポートする必要がある場合は、右上の「ダウンロードボタン」をクリックして `csv` ファイルとしてエクスポートできます。 + +**注意:** アップロードする `csv` ファイルのエンコードは `ユニコード` でなければなりません。そうでないと、実行結果が失敗する可能性があります。解決策:ExcelやWPSなどで `csv` ファイルをエクスポートする際に、エンコードを `ユニコード` に選択します。 + +### 実行結果の保存 + +生成結果の下にある「保存」ボタンをクリックすると、実行結果を保存できます。「保存済み」タブで、すべての保存された内容を見ることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/3cdd15e87aa1f1aae9f6abadb0f16d1f.png) + +### より類似した結果の生成 + +アプリケーションのオーケストレーションで「より類似した」機能を有効にしている場合、Webアプリケーションで「より類似した」ボタンをクリックすると、現在の結果と似た内容を生成できます。以下の図の通りです: + 
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/application-publishing/launch-your-webapp-quickly/65fb111d8e89a8f7b761859265e42f0a.png) \ No newline at end of file diff --git a/ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx new file mode 100644 index 00000000..869eea03 --- /dev/null +++ b/ja-jp/guides/application-publishing/launch-your-webapp-quickly/web-app-settings.mdx @@ -0,0 +1,32 @@ +--- +title: 概要 +--- + + +Web アプリはアプリのユーザーが使用するものです。アプリ開発者が Dify でアプリを作成すると、対応する Web アプリが取得されます。Web アプリのユーザーはログインせずに使用できます。Web アプリは異なるサイズのデバイスに対応しています:PC、タブレット、スマートフォン。 + +Web アプリの内容とアプリの公開設定は一致しています。アプリの設定を変更し、アプリのプロンプト編成ページで「公開」ボタンをクリックして公開すると、Web アプリの内容も現在のアプリの設定に基づいて更新されます。 + +アプリの概要ページで Web アプリへのアクセスを有効または無効にし、Web アプリのサイト情報を変更できます: + +* アイコン +* 名前 +* アプリの説明 +* インターフェース言語 +* 著作権情報 +* プライバシーポリシーリンク + +Web アプリの機能は、開発者がアプリを編成する際にその機能を有効にするかどうかによって異なります。例えば: + +* 会話のオープニング +* 会話前に入力する変数 +* 次の質問の提案 +* 音声からテキストへの変換 +* 引用と帰属 +* より類似した回答(テキスト型アプリ) +* その他 
+ +以下のセクションでは、Web アプリの2種類のタイプについて説明します: + +* テキスト生成型 +* 会話型 \ No newline at end of file diff --git a/ja-jp/user-guide/build-app/agent.mdx b/ja-jp/guides/build-app/agent.mdx similarity index 100% rename from ja-jp/user-guide/build-app/agent.mdx rename to ja-jp/guides/build-app/agent.mdx diff --git a/ja-jp/user-guide/build-app/chatbot.mdx b/ja-jp/guides/build-app/chatbot.mdx similarity index 100% rename from ja-jp/user-guide/build-app/chatbot.mdx rename to ja-jp/guides/build-app/chatbot.mdx diff --git a/ja-jp/user-guide/build-app/text-generator.mdx b/ja-jp/guides/build-app/text-generator.mdx similarity index 100% rename from ja-jp/user-guide/build-app/text-generator.mdx rename to ja-jp/guides/build-app/text-generator.mdx diff --git a/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx b/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx new file mode 100644 index 00000000..8aa966eb --- /dev/null +++ b/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api-documentation.mdx @@ -0,0 +1,161 @@ +--- +title: 外部知識API +--- + +## エンドポイント + +``` +POST /retrieval +``` + +## ヘッダー + +このAPIは、Difyとは独立して開発者が維持管理するナレッジベースに接続するために使用されます。詳細については、[外部ナレッジベースへの接続](https://docs.dify.ai/guides/knowledge-base/connect-external-knowledge-base)を参照してください。`Authorization` HTTPヘッダーで `API-Key` を使用して権限を検証できます。認証ロジックは、以下のように検索APIで定義します: + +``` +Authorization: Bearer {API_KEY} +``` + +## リクエストボディ要素 + +リクエストは以下のJSON形式のデータを受け入れます。 + +| プロパティ | 必須 | 型 | 説明 | 例値 | +|------------|------|-----|------|------| +| knowledge_id | TRUE | string | ナレッジベースの一意ID | AAA-BBB-CCC | +| query | TRUE | string | ユーザーのクエリ | Difyとは何ですか? 
| +| retrieval_setting | TRUE | object | 知識の検索パラメータ | 以下参照 | +| metadata_condition | FALSE | object | メタデータによるフィルタリング条件 | 以下参照 | + +`retrieval_setting` プロパティは以下のキーを含むオブジェクトです: + +| プロパティ | 必須 | 型 | 説明 | 例値 | +|------------|------|-----|------|------| +| top_k | TRUE | int | 検索結果の最大数 | 5 | +| score_threshold | TRUE | float | クエリに対する結果の関連性スコアの閾値、範囲:0〜1 | 0.5 | + +`metadata_condition` プロパティは以下のキーを含むオブジェクトです: + +| 属性 | 必須かどうか | 型 | 説明 | 例 | +|------|----------|------|------|--------| +| logical_operator | いいえ | 文字列 | 論理演算子、値は `and` または `or`、デフォルトは `and` | and | +| conditions | はい | 配列(オブジェクト) | 条件リスト | 以下参照 | + +`conditions` 配列の各オブジェクトには以下のキーが含まれます: + +| 属性 | 必須かどうか | 型 | 説明 | 例 | +|------|----------|------|------|--------| +| name | はい | 配列(文字列) | フィルタリングするmetadataの名前 | `["category", "tag"]` | +| comparison_operator | はい | 文字列 | 比較演算子 | `contains` | +| value | いいえ | 文字列 | 比較値、演算子が `empty`、`not empty`、`null`、`not null` の場合は省略可能 | `"AI"` | + +サポートされている `comparison_operator` 演算子: + +- `contains`:特定の値を含む +- `not contains`:特定の値を含まない +- `start with`:特定の値で始まる +- `end with`:特定の値で終わる +- `is`:特定の値と等しい +- `is not`:特定の値と等しくない +- `empty`:空である +- `not empty`:空ではない +- `=`:等しい +- `≠`:等しくない +- `>`:より大きい +- `<`:より小さい +- `≥`:以上 +- `≤`:以下 +- `before`:特定の日付より前 +- `after`:特定の日付より後 + +## リクエスト構文 + +```json +POST /retrieval HTTP/1.1 +-- ヘッダー +Content-Type: application/json +Authorization: Bearer your-api-key +-- データ +{ + "knowledge_id": "your-knowledge-id", + "query": "your question", + "retrieval_setting":{ + "top_k": 2, + "score_threshold": 0.5 + } +} +``` + +## レスポンス要素 + +アクションが成功した場合、サービスはHTTP 200レスポンスを返します。 + +サービスは以下のデータをJSON形式で返します。 + +| プロパティ | 必須 | 型 | 説明 | 例値 | +|------------|------|-----|------|------| +| records | TRUE | List[Object] | ナレッジベースのクエリ結果のレコードリスト | 以下参照 | + +`records` プロパティは以下のキーを含むリストオブジェクトです: + +| プロパティ | 必須 | 型 | 説明 | 例値 | +|------------|------|-----|------|------| +| content | TRUE | string | ナレッジベースのデータソースからのテキストチャンクを含む | Dify:GenAIアプリケーションのイノベーションエンジン | +| score | TRUE 
| float | クエリに対する結果の関連性スコア、範囲:0〜1 | 0.5 | +| title | TRUE | string | ドキュメントタイトル | Dify紹介 | +| metadata | FALSE | json | データソース内のドキュメントのメタデータ属性とその値を含む | 例参照 | + +## レスポンス構文 + +```json +HTTP/1.1 200 +Content-type: application/json +{ + "records": [{ + "metadata": { + "path": "s3://dify/knowledge.txt", + "description": "dify知識ドキュメント" + }, + "score": 0.98, + "title": "knowledge.txt", + "content": "これは外部知識のドキュメントです。" + }, + { + "metadata": { + "path": "s3://dify/introduce.txt", + "description": "dify紹介" + }, + "score": 0.66, + "title": "introduce.txt", + "content": "GenAIアプリケーションのイノベーションエンジン" + } + ] +} +``` + +## エラー + +アクションが失敗した場合、サービスは以下のエラー情報をJSON形式で返します: + +| プロパティ | 必須 | 型 | 説明 | 例値 | +|------------|------|-----|------|------| +| error_code | TRUE | int | エラーコード | 1001 | +| error_msg | TRUE | string | API例外の説明 | 無効な認証ヘッダー形式です。`Bearer ` 形式が期待されます。 | + +`error_code` プロパティには以下の種類があります: + +| コード | 説明 | +|--------|------| +| 1001 | 無効な認証ヘッダー形式 | +| 1002 | 認証失敗 | +| 2001 | 知識が存在しません | + +### HTTPステータスコード + +**AccessDeniedException** +アクセス権限がないため、リクエストが拒否されました。権限を確認して再試行してください。 +HTTPステータスコード:403 + +**InternalServerException** +内部サーバーエラーが発生しました。リクエストを再試行してください。 +HTTPステータスコード:500 diff --git a/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api.json b/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api.json new file mode 100644 index 00000000..82a9653a --- /dev/null +++ b/ja-jp/guides/knowledge-base/api-documentation/external-knowledge-api.json @@ -0,0 +1,192 @@ +{ + "openapi": "3.0.1", + "info": { "title": "Dify-test", "description": "", "version": "1.0.0" }, + "tags": [], + "paths": { + "/retrieval": { + "post": { + "summary": "知识召回 API", + "deprecated": false, + "description": "该 API 用于连接团队内独立维护的知识库,如需了解更多操作指引,请参考阅读[连接外部知识库](/zh-cn/user-guide/knowledge-base/knowledge-base-creation/connect-external-knowledge-base)。你可以在 Authorization HTTP 头部中使用 API-Key 来验证权限,认证逻辑由开发者在检索 API 中定义,如下所示:\n\n```text\nAuthorization: Bearer {API_KEY}\n```", + 
"tags": [], + "requestBody": { + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "knowledge_id": { + "type": "string", + "description": "你的知识库唯一 ID" + }, + "query": { "type": "string", "description": "用户的提问" }, + "retrieval_setting": { + "type": "object", + "properties": { + "top_k": { + "type": "integer", + "description": "检索结果的最大数量" + }, + "score_threshold": { + "type": "number", + "description": "结果与查询相关性的分数限制,范围 0~1", + "format": "float", + "minimum": 0, + "maximum": 1 + } + }, + "description": "知识库的检索参数", + "required": ["top_k", "score_threshold"] + } + }, + "required": ["knowledge_id", "query", "retrieval_setting"] + }, + "example": { + "knowledge_id": "your-knowledge-id", + "query": "your question", + "retrieval_setting": { "top_k": 2, "score_threshold": 0.5 } + } + } + } + }, + "responses": { + "200": { + "description": "如果操作成功,服务将发回 HTTP 200 响应。", + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "records": { + "type": "array", + "title": "从知识库查询的记录列表", + "items": { + "type": "object", + "properties": { + "content": { + "type": "string", + "description": "包含知识库中数据源的一段文本。" + }, + "score": { + "type": "number", + "format": "float", + "description": "结果与查询相关性的分数,范围: 0~1" + }, + "title": { + "type": "string", + "description": "文档标题" + }, + "metadata": { + "type": "object", + "description": "包含数据源中文档的元数据属性及其值。" + } + }, + "required": ["content", "score", "title"] + } + } + }, + "required": ["records"] + }, + "examples": { + "1": { + "summary": "Success", + "value": { + "records": [ + { + "metadata": { + "path": "s3://dify/knowledge.txt", + "description": "dify 知识文档" + }, + "score": 0.98, + "title": "knowledge.txt", + "content": "外部知识的文件" + }, + { + "metadata": { + "path": "s3://dify/introduce.txt", + "description": "Dify 介绍" + }, + "score": 0.66, + "title": "introduce.txt", + "content": "The Innovation Engine for GenAI Applications" + } + ] + } + } + } + } + }, + "headers": {} + }, + "403": { + "description": 
"请求被拒绝因为缺失访问权限。请检查你的权限并再次发起请求。", + "content": { + "application/json": { + "schema": { + "title": "", + "type": "object", + "properties": { + "error_code": { + "type": "integer", + "description": "错误码" + }, + "error_msg": { + "type": "string", + "description": "API 异常描述" + } + }, + "required": ["error_code", "error_msg"] + }, + "examples": { + "1": { + "summary": "Errors", + "value": { + "error_code": 1001, + "error_msg": "无效的鉴权头格式,预期应为 'Bearer ' 格式。" + } + } + } + } + }, + "headers": {} + }, + "500": { + "description": "发生了内部服务器错误,请重试请求。", + "content": { + "application/json": { + "schema": { + "title": "", + "type": "object", + "properties": { + "error_code": { + "type": "integer", + "description": "错误码" + }, + "error_msg": { + "type": "string", + "description": "API 异常描述" + } + }, + "required": ["error_code", "error_msg"] + }, + "examples": { + "1": { + "summary": "Errors", + "value": { + "error_code": 1001, + "error_msg": "Invalid Authorization header format. Expected 'Bearer ' format." 
+ } + } + } + } + }, + "headers": {} + } + }, + "security": [{ "bearer": [] }] + } + } + }, + "components": { + "schemas": {}, + "securitySchemes": { "bearer": { "type": "http", "scheme": "bearer" } } + }, + "servers": [{ "url": "your-endpoint", "description": "test-environment" }] +} diff --git a/ja-jp/guides/knowledge-base/api-documentation/maintain-dataset-via-api.mdx b/ja-jp/guides/knowledge-base/api-documentation/maintain-dataset-via-api.mdx new file mode 100644 index 00000000..9a8852a6 --- /dev/null +++ b/ja-jp/guides/knowledge-base/api-documentation/maintain-dataset-via-api.mdx @@ -0,0 +1,550 @@ +--- +title: 通过 API 维护知识库 +version: '简体中文' +--- + +> 鉴权、调用方式与应用 Service API 保持一致,不同之处在于,所生成的单个知识库 API token 具备操作当前账号下所有可见知识库的权限,请注意数据安全。 + +### 使用知识库 API 的优势 + +通过 API 维护知识库可大幅提升数据处理效率,你可以通过命令行轻松同步数据,实现自动化操作,而无需在用户界面进行繁琐操作。 + +主要优势包括: + +* 自动同步: 将数据系统与 Dify 知识库无缝对接,构建高效工作流程; +* 全面管理: 提供知识库列表,文档列表及详情查询等功能,方便你自建数据管理界面; +* 灵活上传: 支持纯文本和文件上传方式,可针对分段(Chunks)内容的批量新增和修改操作; +* 提高效率: 减少手动处理时间,提升 Dify 平台使用体验。 + +### 如何使用 + +进入知识库页面,在左侧的导航中切换至 **API** 页面。在该页面中你可以查看 Dify 提供的知识库 API 文档,并可以在 **API 密钥** 中管理可访问知识库 API 的凭据。 + +![](https://assets-docs.dify.ai/2025/03/82ef51dc6886fb8301a2b85a920b12d0.png) + +### API 调用示例 + +#### 通过文本创建文档 + +输入示例: + +```json +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_text' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "text","text": "text","indexing_technique": "high_quality","process_rule": {"mode": "automatic"}}' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "text.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695690280, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + 
"archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} +``` + +#### 通过文件创建文档 + +输入示例: + +```json +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_file' \ +--header 'Authorization: Bearer {api_key}' \ +--form 'data="{"indexing_technique":"high_quality","process_rule":{"rules":{"pre_processing_rules":[{"id":"remove_extra_spaces","enabled":true},{"id":"remove_urls_emails","enabled":true}],"segmentation":{"separator":"###","max_tokens":500}},"mode":"custom"}}";type=text/plain' \ +--form 'file=@"/path/to/file"' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "Dify.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} + +``` + +#### **创建空知识库** + + + 仅用来创建空知识库 + + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "name", "permission": "only_me"}' +``` + +输出示例: + +```json +{ + "id": "", + "name": "name", + "description": null, + "provider": "vendor", + "permission": "only_me", + "data_source_type": null, + "indexing_technique": null, + "app_count": 0, + "document_count": 0, + "word_count": 0, + "created_by": "", + "created_at": 1695636173, + "updated_by": "", + "updated_at": 1695636173, + "embedding_model": null, + "embedding_model_provider": null, + "embedding_available": null +} +``` + +#### **知识库列表** + +输入示例: + +```bash +curl --location --request GET 
'https://api.dify.ai/v1/datasets?page=1&limit=20' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data": [ + { + "id": "", + "name": "知识库名称", + "description": "描述信息", + "permission": "only_me", + "data_source_type": "upload_file", + "indexing_technique": "", + "app_count": 2, + "document_count": 10, + "word_count": 1200, + "created_by": "", + "created_at": "", + "updated_by": "", + "updated_at": "" + }, + ... + ], + "has_more": true, + "limit": 20, + "total": 50, + "page": 1 +} +``` + +#### 删除知识库 + +输入示例: + +```json +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +204 No Content +``` + +#### 通过文本更新文档 + +此接口基于已存在知识库,在此知识库的基础上通过文本更新文档 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_text' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "name","text": "text"}' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "name.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} +``` + +#### 通过文件更新文档 + +此接口基于已存在知识库,在此知识库的基础上通过文件更新文档的操作。 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_file' \ +--header 'Authorization: Bearer {api_key}' \ +--form 
'data="{"name":"Dify","indexing_technique":"high_quality","process_rule":{"rules":{"pre_processing_rules":[{"id":"remove_extra_spaces","enabled":true},{"id":"remove_urls_emails","enabled":true}],"segmentation":{"separator":"###","max_tokens":500}},"mode":"custom"}}";type=text/plain' \ +--form 'file=@"/path/to/file"' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "Dify.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "20230921150427533684" +} +``` + + +#### **获取文档嵌入状态(进度)** + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{batch}/indexing-status' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data":[{ + "id": "", + "indexing_status": "indexing", + "processing_started_at": 1681623462.0, + "parsing_completed_at": 1681623462.0, + "cleaning_completed_at": 1681623462.0, + "splitting_completed_at": 1681623462.0, + "completed_at": null, + "paused_at": null, + "error": null, + "stopped_at": null, + "completed_segments": 24, + "total_segments": 100 + }] +} +``` + +#### **删除文档** + +输入示例: + +```bash +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```bash +{ + "result": "success" +} +``` + +#### **知识库文档列表** + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data": [ + { + "id": "", + "position": 1, + 
"data_source_type": "file_upload", + "data_source_info": null, + "dataset_process_rule_id": null, + "name": "dify", + "created_from": "", + "created_by": "", + "created_at": 1681623639, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false + }, + ], + "has_more": false, + "limit": 20, + "total": 9, + "page": 1 +} +``` + +#### **新增分段** + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"segments": [{"content": "1","answer": "1","keywords": ["a"]}]}' +``` + +输出示例: + +```json +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], + "doc_form": "text_model" +} + +``` + +### 查询文档分段 + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' +``` + +输出示例: + +```bash +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], 
+ "doc_form": "text_model" +} +``` + +### 删除文档分段 + +输入示例: + +```bash +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments/{segment_id}' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' +``` + +输出示例: + +```bash +{ + "result": "success" +} +``` + +### 更新文档分段 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments/{segment_id}' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"segment": {"content": "1","answer": "1", "keywords": ["a"], "enabled": false}}' +``` + +输出示例: + +```bash +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], + "doc_form": "text_model" +} +``` + + +### 错误信息 + +| 错误信息 | 错误码 | 原因描述 | +|------|--------|---------| +| no_file_uploaded | 400 | 请上传你的文件 | +| too_many_files | 400 | 只允许上传一个文件 | +| file_too_large | 413 | 文件大小超出限制 | +| unsupported_file_type | 415 | 不支持的文件类型。目前只支持以下内容格式:`txt`, `markdown`, `md`, `pdf`, `html`, `htm`, `xlsx`, `docx`, `csv` | +| high_quality_dataset_only | 400 | 当前操作仅支持"高质量"知识库 | +| dataset_not_initialized | 400 | 知识库仍在初始化或索引中。请稍候 | +| archived_document_immutable | 403 | 归档文档不可编辑 | +| dataset_name_duplicate | 409 | 知识库名称已存在,请修改你的知识库名称 | +| invalid_action | 400 | 无效操作 | +| document_already_finished | 400 | 文档已处理完成。请刷新页面或查看文档详情 | +| document_indexing | 400 | 文档正在处理中,无法编辑 | +| invalid_metadata | 400 | 元数据内容不正确。请检查并验证 | diff --git 
a/ja-jp/guides/knowledge-base/connect-external-knowledge-base.mdx b/ja-jp/guides/knowledge-base/connect-external-knowledge-base.mdx new file mode 100644 index 00000000..99ea1ca9 --- /dev/null +++ b/ja-jp/guides/knowledge-base/connect-external-knowledge-base.mdx @@ -0,0 +1,123 @@ +--- +title: 外部ナレッジベースとの接続 +--- + +> この文書では、Difyプラットフォームとは独立したナレッジベースを総称して**「外部ナレッジベース」**と呼びます。 + +## 機能概要 + +高度なコンテンツ検索の要件を持つ上級開発者にとって、Difyプラットフォームに組み込まれたナレッジベース機能とテキスト検索・取得メカニズムには**制約があり、検索結果を簡単に変更することができません。** + +テキスト検索と取得の精度に高い要求を持ち、内部資料の管理ニーズを満たすために、一部のチームは独自にRAGアルゴリズムを開発し、自社のテキスト取得システムを維持したり、コンテンツをクラウドプロバイダのナレッジベースサービス(例:[AWS Bedrock](https://aws.amazon.com/bedrock/))に統合したりしています。 + +中立的なLLMアプリケーション開発プラットフォームであるDifyは、開発者にさまざまな選択肢を提供することを目指しています。 + +**外部ナレッジベースに接続する**機能を使うことで、Difyプラットフォームと外部ナレッジベースを接続できます。APIサービスを通じて、AIアプリケーションはより多くの情報源からデータを取得できるようになります。具体的には: + +- Difyプラットフォームは、クラウドサービスプロバイダのナレッジベースにホスティングされているテキストコンテンツを直接取得でき、開発者はDify内のナレッジベースに再度コンテンツを転送する必要がありません。 +- Difyプラットフォームは、独自に構築したナレッジベース内のアルゴリズムで処理されたテキストコンテンツを直接取得でき、開発者は独自のナレッジベースの情報検索メカニズムに焦点を当て、情報取得の精度を継続的に最適化できます。 + + + + + +以下は外部ナレッジベースに接続するための詳細な手順です: + + + + Difyとの接続を成功させるためには、外部ナレッジベースがDifyが作成した[外部ナレッジベースAPI仕様](../api-connection/external-knowledge-api-documentation)を慎重に読み、APIサービスを構築する必要があります。 + + + > 現在、Difyは外部ナレッジベースに対する最適化や変更をサポートしておらず、検索権限のみを有しています。開発者は外部ナレッジベースを自己管理する必要があります。 + + **"ナレッジベース"** ページに移動し、右上の **"外部ナレッジベースAPI"** をクリックし、 **"外部ナレッジベースAPIを追加"** を選択します。 + + ページの指示に従い、以下の内容を順番に入力してください: + + * ナレッジベースの名前:カスタム名を設定でき、接続された異なる外部ナレッジAPIを区別するために使用します。 + * APIインターフェースアドレス:外部ナレッジベースの接続アドレス(例:`api-endpoint/retrieval`);詳細な説明は[外部ナレッジベースAPI](../api-connection/external-knowledge-api-documentation)を参照してください。 + * APIキー:外部ナレッジベース接続のためのキー;詳細な説明は[外部ナレッジベースAPI](../api-connection/external-knowledge-api-documentation)を参照してください。 + + + + + + + **"ナレッジベース"** ページに移動し、追加したナレッジベースカードの下にある **"外部ナレッジベースに接続"** をクリックして、パラメーター設定ページに移動します。 + + + + + + 以下のパラメータを入力してください: + + - **ナレッジベースの名称と説明** + - **外部ナレッジベースAPI**: 
ステップ2で関連付けられた外部ナレッジベースAPIを選択します。DifyはAPI接続を通じて、外部ナレッジベースに保存されているテキスト内容を呼び出します。 + - **外部ナレッジベースID**: 関連付ける特定の外部ナレッジベースIDを指定します。詳細については[外部ナレッジベースAPI](../api-connection/external-knowledge-api-documentation)を参照してください。 + - **検索設定の調整** + + **Top K:** ユーザーが質問を行った際に、外部知識APIに関連性の高いコンテンツ片を要求します。このパラメータは、ユーザーの質問との類似度が高いテキスト片をフィルタリングするために使用されます。デフォルト値は3で、数値が大きいほど関連性のあるテキスト片が多く取得されます。 + + **スコア閾値:** テキスト片フィルタリングの類似度閾値で、設定されたスコアを超えるテキスト片のみが取得されます。デフォルト値は0.5です。数値が高いほど、テキストと質問の類似度が高く、取得されるテキストの数が少なくなり、結果的により精度が高まります。 + + + + + + + + 接続が確立された後、開発者は **「検索テスト」** で可能な質問キーワードをシミュレーションし、外部ナレッジベースから取得されたテキスト片をプレビューできます。検索結果に満足できない場合は、検索パラメータの変更や外部ナレッジベースの検索設定の調整を試みてください。 + + + + + + + - **Chatbot / エージェント型アプリ** + + Chatbot / エージェント型アプリの編成ページにある **「コンテキスト」** で、`EXTERNAL`ラベルの付いた外部ナレッジベースを選択します。 + + + + + + - **チャットフロー / ワークフロー型アプリ** + + チャットフロー / ワークフロー型アプリに **「知識検索」** ノードを追加し、`EXTERNAL`ラベルの付いた外部ナレッジベースを選択します。 + +
+ + +
+ + **「ナレッジベース」**ページでは、外部ナレッジベースのカードの右上に**EXTERNAL**ラベルが表示されます。変更が必要なナレッジベースに入って、**「設定」**をクリックして以下の内容を変更します: + + * **ナレッジベースの名称と説明** + * **可視範囲**: 「自分だけ」、「全チームメンバー」、「一部のチームメンバー」の3つの権限範囲を提供します。権限のない人はそのナレッジベースにアクセスできません。ナレッジベースを他のメンバーに公開することを選択した場合、他のメンバーもそのナレッジベースの閲覧、編集、および削除権限を持つことになります。 + * **検索設定** + + **Top K:** ユーザーが質問をした際に、外部知識APIに関連性の高いコンテンツ片を要求します。このパラメータは、ユーザーの質問との類似度が高いテキスト片をフィルタリングするために使用されます。デフォルト値は3で、数値が大きいほど関連性のあるテキスト片が多く取得されます。 + + **スコア閾値:** テキスト片フィルタリングの類似度閾値で、設定されたスコアを超えるテキスト片のみが取得されます。デフォルト値は0.5です。数値が高いほど、テキストと質問の類似度が高く、取得されるテキストの数が少なくなり、結果的により精度が高まります。 + + 外部ナレッジベースに関連付けられた **「外部ナレッジベースAPI」** と **「外部知識ID」** は変更できません。変更が必要な場合は、新しい「外部ナレッジベースAPI」を関連付けて再接続してください。 + + + + +
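本ドキュメントで説明した Bearer 認証とエラーコード(1001 / 1002 / 2001)の扱いを、最小限の Python 関数としてスケッチすると次のようになります。`API_KEY` やナレッジベース ID の持ち方、エラーコード 2001 に対する HTTP ステータスの対応付けは、いずれも説明用の仮定です:

```python
# /retrieval エンドポイントの骨組みのスケッチ(フレームワーク非依存の純粋関数)
API_KEY = "your-api-key"               # 仮の値:実際の検証ロジックは実装者が定義する
KNOWN_KNOWLEDGE_IDS = {"AAA-BBB-CCC"}  # 仮のナレッジベース ID 一覧

def handle_retrieval(authorization, body):
    """リクエストを検証し、(HTTPステータス, レスポンスボディ) のタプルを返す。"""
    # エラーコード 1001: Authorization ヘッダー形式の検証
    if not authorization or not authorization.startswith("Bearer "):
        return 403, {"error_code": 1001,
                     "error_msg": "Invalid Authorization header format. Expected 'Bearer ' format."}
    # エラーコード 1002: API キーの検証
    if authorization.removeprefix("Bearer ") != API_KEY:
        return 403, {"error_code": 1002, "error_msg": "Authorization failed"}
    # エラーコード 2001: ナレッジベースの存在確認(ここでは 404 を返すと仮定)
    if body.get("knowledge_id") not in KNOWN_KNOWLEDGE_IDS:
        return 404, {"error_code": 2001, "error_msg": "The knowledge does not exist"}
    setting = body["retrieval_setting"]
    # 実際にはここで独自の検索処理(ベクトル検索など)を行う。以下はダミーの結果。
    records = [{"content": "これは外部知識のドキュメントです。", "score": 0.98,
                "title": "knowledge.txt", "metadata": {"path": "s3://dify/knowledge.txt"}}]
    records = [r for r in records
               if r["score"] >= setting["score_threshold"]][: setting["top_k"]]
    return 200, {"records": records}
```

この関数を任意の Web フレームワークの `/retrieval` ハンドラに接続し、ダミーの `records` 部分を独自の検索処理に置き換える構成を想定しています。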
+ +## よくある質問 + +**外部ナレッジベースAPI接続時に異常が発生し、エラーが表示された場合の対処法は?** + +以下は、返された各エラーコードに対するエラーメッセージと解決策です: + +| エラーコード | エラーメッセージ | 解決策 | +| ---- | --------------------------- | ------------------------------ | +| 1001 | 無効なAuthorization header形式 | リクエストのAuthorization header形式を確認してください | +| 1002 | 認証異常 | 入力したAPIキーが正しいか確認してください | +| 2001 | ナレッジベースが存在しない | 外部ナレッジベースを確認してください | \ No newline at end of file diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx new file mode 100644 index 00000000..d7481b93 --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.mdx @@ -0,0 +1,163 @@ +--- +title: 2. チャンクモードの指定 +--- + +コンテンツをナレッジベースにアップロードした後、次に行うべき作業は、コンテンツの分割とデータのクリーニングです。この段階では、コンテンツの前処理と構造化を行い、長いテキストを複数の小さなブロックに分割します。 + + + + +* 分割 + + 大規模な言語モデルが処理できる情報量には限界があるため、知識データベースのコンテンツを一度にすべて処理することはできません。このため、長い文書をより小さいコンテンツブロックに分割する必要があります。一部のモデルでは、文書全体をアップロードする機能をサポートしていますが、実験により、コンテンツをブロックごとに検索した方が効率的であることが分かっています。 + + 言語モデルが知識データベース内の情報に基づいて正確な回答を提供できるかどうかは、コンテンツブロックの検索と選択の効果に依存します。マニュアルから必要な章を探すように、文書全体を詳細に分析することなく迅速に答えを見つけられます。分割された知識データベースでは、ユーザーの質問に基づいて、関連性が高いコンテンツブロックを選択し、重要な情報を提供することで、回答の精度を向上させます。 + + 質問とコンテンツブロックの意味的なマッチングを行う際、適切な分割サイズが非常に重要です。これにより、モデルが問題に最も関連性が高いコンテンツを正確に特定し、無関係な情報を減らすことができます。分割が大きすぎるか小さすぎると、選択の効果に悪影響を及ぼす可能性があります。 + + Difyは、「汎用分割」と「階層分割」の2種類の分割モードを提供しており、それぞれ異なる文書の構造と用途に適応し、異なる検索と選択の効率と精度の要件を満たします。 + +* クリーニング + + テキストの選択効果を保証するためには、通常、データを知識データベースに入力する前にクリーニングが必要です。例えば、意味のない文字や空行が含まれている可能性があり、これらは応答の品質に影響を与えるため、クリーニングが必要です。Difyでは、自動的なクリーニング戦略が組み込まれており、詳細はETLセクションを参照してください。 + + + +ユーザーが質問した後、LLMがその質問に基づいたナレッジベースからの正確な回答を提供できるかどうかは、関連する情報ブロックを効率的に検索し取り出せるかにかかっています。AIアプリケーションが正確かつ包括的な回答を出すためには、その問題に直接関わる情報ブロックの特定が非常に重要です。 + 
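上で説明した「長いテキストを区切り文字で分割し、長すぎる断片は重複(オーバーラップ)付きでさらに分割する」という考え方は、次のような簡単なスケッチで表せます。ここでは説明のため、トークン数ではなく文字数で数えており、区切り文字・最大長・オーバーラップの値はいずれも例です:

```python
def split_text(text, separator="\n\n", max_len=500, overlap=50):
    """区切り文字で段落に分け、max_len を超える断片は overlap 付きで再分割する。

    overlap < max_len を前提とした簡易版(実際の Dify はトークン単位で処理する)。
    """
    chunks = []
    for part in text.split(separator):
        part = part.strip()
        if not part:
            continue  # 空の断片を除去(簡易的なクリーニングの例)
        if len(part) <= max_len:
            chunks.append(part)
        else:
            step = max_len - overlap  # 隣接チャンクが overlap 文字ぶん重なるように進める
            for i in range(0, len(part), step):
                chunks.append(part[i:i + max_len])
    return chunks
```

`overlap` は前後のチャンクで文脈が途切れないようにするための重複分で、本文で推奨されている「セグメント長の10%〜25%」程度を目安に設定します。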
+例えば、スマートカスタマーサービスの場合、LLMがツールマニュアル内の重要な章の情報ブロックをすぐに見つけ出せれば、ユーザーの問いに素早く答えられます。これにより、文書全体を何度も分析する手間が省けます。結果として、AIアプリケーションの質問応答(Q&A)機能の品質を、トークン使用量を節約しつつ向上させることができます。 + +## ナレッジベースのセグメント分類方法の選択 + +私たちのナレッジベースでは、以下の2つのセグメント分類方法を提供しています。 + +* **汎用分割** + + + 注意:以前の「自動セグメント分割&クリーニング」モードは、自動で汎用分割へとアップデートされました。何も手を加える必要はありません。既定の設定でそのまま利用可能です。 + + +* **親子分割(階層分割)** + + + セグメント分類方法を選んでナレッジベースを作成した後での変更は不可能です。ナレッジベースに新たに追加される文書も、選択した同じセグメント分類方法に従います。 + + +### 汎用分割モード + +汎用分割では、システムはユーザーが設定したルールに沿ってコンテンツを独立したセグメントに分割します。ユーザーが検索クエリを入力すると、システムは自動的にそのクエリのキーワードを分析し、それらのキーワードとナレッジベース内の各コンテンツセグメントとの関連性を評価します。その後、関連性が高いものから順に並べ、最も関連性の高いコンテンツセグメントを選択し、大規模言語モデル(LLM)による処理と回答を行います。 + +このモードでは、異なる文書形式やシナリオの要件に応じて、以下の設定項目を参考にしながら、テキストのセグメント**分割ルール**を手動で調整することが必要です。 + +* **セグメント分割識別子**:デフォルト値は `\n\n` で、文書内の各段落をセグメントに分割します。[正規表現のルール](https://regexr.com/)に従って、分割ルールをカスタマイズできます。例えば、`\n` は各行をセグメントに分割することを意味します。下記の図は、異なる文法を用いたテキスト分割の効果を示しています: + +さまざまなセグメント識別子の構文によるセグメンテーションの影響 + +* **セグメントの最大長さ**:セグメント内のテキスト文字数の最大値を設定します。この長さを超えると、強制的にセグメントが分割されます。デフォルト値は 500 トークンで、セグメント長の最大値は 4000 トークンです。 + +* **セグメントの重複長さ**:データをセグメントに分割する際、セグメント間で一定量の重複が生じます。この重複は情報の損失を防ぎ、分析の精度を向上させ、情報のリコール効率を高めるのに役立ちます。セグメント長の10%から25%を重複させることを推奨します。 + +**テキスト前処理ルール**:ナレッジベース内の不要な内容をフィルタリングするための設定です。 + +* 連続する空白、改行、タブを置換 +* すべてのURLと電子メールアドレスを削除 + +設定完了後、「プレビューブロック」をクリックすることで、セグメント分割後の効果を確認できます。各セグメントの文字数が直感的に理解可能です。 + +複数の文書を一括でアップロードした場合、文書のタイトルをクリックすることで、他の文書のセグメント分割効果を素早く確認できます。 + +汎用分割 + +セグメント分割ルールの設定が完了したら、次にインデックス方式を選択する必要があります。「高品質インデックス」と「経済インデックス」が利用可能で、詳細は[インデックス方法の設定](./setting-indexing-methods)をご覧ください。 + +### 親子分割モード(階層分割モード) + +汎用分割モードと比べると、親子分割モードは、データを二層構造で扱うことで、詳細なマッチングと文脈情報の提供の両方を可能にします。例として、AIを活用したカスタマーサポートでは、このモードを用いてユーザーの質問を解決策のドキュメント内の特定の文へと紐づけ、その文が含まれる段落や章をLLMへと送信します。これにより、質問の背景情報を完全に把握し、より適切な回答を提供することができます。 + +基本的な動作は以下の通りです: + +* サブセグメントマッチングクエリ: + * ドキュメントを小さな情報単位(例えば、一文)に分割し、ユーザーの質問により精密にマッチングします。 + * サブセグメントは、ユーザーのニーズに最も適した初期結果を素早く提供します。 +* メインセグメントによる文脈を提供: + * 
マッチングしたサブセグメントを含むより大きな単位(段落、章、または文書全体)をメインブロックとして扱い、LLMへと送信します。 + * メインセグメントは、LLMが情報を逃さず、ナレッジベースに基づいた適切な回答を導くための完全な背景情報を提供します。 + +親子分割モード原理 + +このモードでは、文書の形式やシナリオの要求に応じて、手動で階層型セグメンテーションのルールを設定する必要があります。 + +**メインセグメント(親セグメント)**: + +メインセグメントの設定では、以下のオプションを提供します: + +* 段落 + あらかじめ設定された区切り記号ルールと最大ブロック長を基にテキストを段落に分割します。各段落はメインブロックとして扱われ、テキスト量が多く、内容が明確で段落が独立している文書に適しています。以下の設定オプションがあります: + + * **区切り文字**、デフォルトは `\n\n` で、テキストの段落に従って分割します。[正規表現の文法](https://regexr.com/)に従ったカスタムルールを設定でき、テキストに区切り文字が現れたときに自動的に分割します。 + + * **最大分割長**、分割内のテキストの最大文字数を指定し、超えると自動的に分割します。デフォルトは500トークンで、最大4000トークンまで設定可能です。 + +* 全文 + 段落に分けずに、全文を単一のメインブロックとして扱います。パフォーマンスの観点から、テキスト内の最初の10000トークンの文字のみが保持され、テキスト量が少なく、段落間に関連性があり、全文を完全に検索する必要があるシナリオに適しています。 + +親子分割モードでの段落と全文のプレビュー + +**サブセグメント(子セグメント)**: + +サブセグメントのテキストは、メインテキストのセグメントに基づいて、区切り記号ルールに従って分割されます。これは、クエリのキーワードに最も関連し、直接的な情報を検索しマッチングするために使用されます。 + +メインセグメントが段落の場合、サブセグメントはその段落内の個別の文です;メインセグメントが全文の場合、サブセグメントは全文中の各個別の文です。 + +* **区切り文字**、デフォルトは \n で、文に従って分割します。[正規表現の文法](https://regexr.com/)に従ったカスタムルールを設定でき、テキストに区切り文字が現れたときに自動的に分割します。 + +* **最大分割長**、分割内のテキストの最大文字数を指定し、超えると自動的に分割します。デフォルトは200トークンで、最大4000トークンまで設定可能です。 + +設定完了後、「プレビュー」ボタンをクリックすると、分割された結果を確認できます。メインブロック全体の文字数が確認でき、背景が青色で表示された部分がサブブロックであり、現在のサブセグメントの文字数も表示されます。 + +分割ルールを変更した場合は、「プレビュー」ボタンを再度クリックして、新しい内容の分割結果を確認する必要があります。 + +複数の文書を同時にアップロードした場合、ページ上部の文書タイトルをタップして、他の文書へ素早く切り替えて分割結果をプレビューできます。 + +親子分割モード + +コンテンツ検索の精度を確保するため、親子分割モードは[「高品質インデックス」](../create-knowledge-and-upload-documents/chunking-and-cleaning-text#gao-zhi-liang-suo-yin)の使用のみをサポートしています。 + +### 二つのモードの主な違いは何ですか? 
+ +主な違いは、コンテンツをどのように分割するかにあります。汎用モードでは、複数の独立したブロックにコンテンツが分けられますが、親子モードでは二層構造を使ってコンテンツを分割します。つまり、一つの親ブロック(文書全体や段落)が、複数の子ブロック(文)を含む構造になっています。 + +この分割方法の違いが、LLMがナレッジベースを検索する際の効率に大きな影響を与えます。特に、親子検索では、より包括的なコンテキスト情報が提供されるため、精度も向上し、従来の単層の汎用検索方法と比べて格段に優れた性能を発揮します。 + +汎用モードと親子モードの検索効率の比較 + +### もっと読む + +分割モードを選んだら、次にインデックスの設定や検索方法の調整を行い、ナレッジベースの構築を進めましょう。 + +* [インデックス方式](./setting-indexing-methods.md) +* [検索オプションの設定](./selecting-retrieval-settings.md) diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx new file mode 100644 index 00000000..cdc5e848 --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/readme.mdx @@ -0,0 +1,38 @@ +--- +title: 1. テキストデータのインポート +--- + +Difyプラットフォームの上部ナビゲーションにある**「ナレッジベース」**→**「ナレッジベースを作成」**をクリックします。ローカルファイルのアップロードやオンラインデータのインポートを通じて、ドキュメントをナレッジベースにアップロードできます。 + +### **ローカルファイルのアップロード** + +ファイルをドラッグ&ドロップするか選択してアップロードします。**バッチアップロードに対応**していますが、一度にアップロードできるファイル数は[サブスクリプションプラン](https://dify.ai/pricing)によって制限されています。 + +**ローカルドキュメントのアップロードには以下の制限があります:** + +* 単一ドキュメントのアップロードサイズは**15MB**に制限されています +* 異なるSaaSバージョンの[サブスクリプションプラン](https://dify.ai/pricing)により**バッチアップロード数、ドキュメント総アップロード数、ベクトルストレージスペース**が制限されています + +ナレッジベースの作成 + +### **オンラインデータのインポート** + +ナレッジベース作成時にオンラインデータからのインポートに対応しています。ナレッジベースでは以下の2種類のオンラインデータインポートをサポートしています: + + + Notionからデータをインポートする方法について学ぶ + + + + ウェブサイトからデータをインポートする方法について学ぶ + + +オンラインデータを参照するナレッジベースには、後からローカルドキュメントを追加することはできません。また、ローカルファイルタイプのナレッジベースに変更することもできません。これは、一つのナレッジベースに複数のデータソースが存在することで管理が困難になるのを防ぐためです。 + +### 後からのインポート + +ドキュメントやその他のコンテンツデータの準備ができていない場合は、まず空のナレッジベースを作成し、後からローカルドキュメントをアップロードするか、オンラインデータをインポートすることができます。 diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx 
b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx new file mode 100644 index 00000000..9d7a246f --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.mdx @@ -0,0 +1,110 @@ +--- +title: 1.1 Notionデータをインポート +--- + +DifyデータセットはNotionからのインポートをサポートし、**同期**を設定することで、Notionのデータが更新されると自動的にDifyに同期されます。 + +### 認証確認 + +1. データセットを作成し、データソースを選択する際に、**Notion内容から同期-- バインドへ進み、指示に従って認証確認を完了してください。** +2. または、**設定 -- データソース -- データソースを追加**に進み、Notionソースで**バインド**をクリックして認証確認を完了することもできます。 + +Notionをバインド + +### Notionデータのインポート + +認証確認が完了したら、データセット作成ページに進み、**Notion内容から同期**をクリックし、必要な認証ページを選択してインポートします。 + +Notionをインポートする + +### 分割とクリーニングの実施 + +次に、**分割設定**と**インデックス方式**を選択し、**保存して処理**をクリックします。Difyがこれらのデータを処理するのを待ちます。このステップでは、大規模言語モデル(LLM)サプライヤーでトークンが消費される場合があります。Difyは通常のページデータのインポートをサポートするだけでなく、データベースタイプのページ属性もまとめて保存します。 + + +**注意点:画像やファイルのインポートは現在サポートされていません。表データはテキストとして表示されます。** + + +Notionのコンテンツをチャンク化する + +### Notionデータの同期 + +Notionの内容に変更があった場合、Difyデータセットの**文書リストページ**で**同期**をクリックするだけで、データを一括で同期できます。このステップでもトークンが消費されます。 + +Notion内容を同期 + +### コミュニティ版Notionの統合設定方法 + +Notionの統合は、**インターナル統合**(internal integration)と**パブリック統合**(public integration)の2種類があります。Difyで必要に応じて設定できます。2つの統合方法の具体的な違いについては[Notion公式ドキュメント](https://developers.notion.com/docs/authorization)を参照してください。 + +### 1. **インターナル統合方式の利用** + +まず、統合設定ページで[統合を作成](https://www.notion.so/my-integrations)します。デフォルトでは、すべての統合はインターナル統合として開始されます。インターナル統合は選択したワークスペースと関連付けられるため、ワークスペースの所有者である必要があります。 + +具体的な操作手順: + +**New integration**ボタンをクリックし、タイプはデフォルトで**インターナル**(変更不可)です。関連付けるスペースを選択し、統合名を入力しロゴをアップロードした後、**Submit**をクリックして統合を作成します。 + +インターナル統合の作成 + +統合を作成したら、必要に応じてCapabilitiesタブで設定を更新し、Secretsタブで**Show**ボタンをクリックしてSecretsをコピーします。 + +Notion Secret情報 + +コピーした後、Difyのソースコードに戻り、**.env**ファイルに関連する環境変数を設定します。環境変数は以下の通りです: + +**NOTION_INTEGRATION_TYPE** = internal または **NOTION_INTEGRATION_TYPE** = public + 
+**NOTION_INTERNAL_SECRET**=your-internal-secret + +### 2、**パブリック統合方式の利用** + +**インターナル統合をパブリック統合にアップグレードする必要があります**。統合の配布ページに移動し、スイッチを切り替えて統合を公開します。スイッチをパブリック設定に切り替えるには、以下の組織情報フォームに会社名、Webサイト、リダイレクトURLなどの情報を入力し、**Submit**ボタンをクリックします。 + +パブリック統合の設定 + +統合の設定ページで公開に成功すると、Secretsタブで統合のシークレットにアクセスできるようになります: + +パブリック統合のシークレット + +Difyのソースコードに戻り、**.env**ファイルに関連する環境変数を設定します。環境変数は以下の通りです: + +**NOTION_INTEGRATION_TYPE**=public + +**NOTION_CLIENT_SECRET**=your-client-secret + +**NOTION_CLIENT_ID**=your-client-id + +設定が完了したら、データセットでNotionのデータインポートおよび同期機能を操作できます。 \ No newline at end of file diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx new file mode 100644 index 00000000..e3105acd --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.mdx @@ -0,0 +1,93 @@ +--- +title: 1.2 Webサイトからデータをインポート +--- + +Dify のナレッジベースでは、[Jina Reader](https://jina.ai/reader)や[Firecrawl](https://www.firecrawl.dev/)を利用してウェブページをスクレイピングし、解析したデータをMarkdownの形式でナレッジベースに取り込むことができます。 + + +[Jina Reader](https://jina.ai/reader)や[Firecrawl](https://www.firecrawl.dev/)は、オープンソースのウェブページ解析ツールです。ウェブページをクリーンで大規模言語モデル(LLM)が扱いやすいMarkdown形式のテキストに変換します。また、使いやすいAPIサービスも提供しています。 + + +## Firecrawl + +### Firecrawlの認証情報の設定 + +右上隅にあるアバターをクリックし、DataSourceページでFirecrawlの認証情報を設定する必要があります。 + +データソース設定ページ + +[Firecrawl 公式サイト](https://www.firecrawl.dev/) にログインして登録を完了し、APIキーを取得してから入力し、保存します。 + +Firecrawl APIキー設定 + +### Firecrawl を使用してWebコンテンツをクロールする + +ナレッジベース作成のページで**Sync from website**を選択し、スクレイピング対象のウェブページのURLを入力します。 + +設定項目には、サブページのスクレイピング、スクレイピングするページの上限、ページのスクレイピング深度、ページの除外、指定ページのみのスクレイピング、コンテンツの抽出などが含まれます。設定が完了したら **Run** をクリックし、解析結果のページをプレビューします。 + +Webコンテンツをクロールする + +解析されたテキストをナレッジベースのドキュメントにインポートし、結果を確認します。**Add URL** をクリックすると、新しいウェブページをさらにインポートできます。 + +*** + +## Jina Reader + +### Jina 
Readerの認証情報の設定 + +右上隅にあるアバターをクリックし、DataSourceページでJina Readerの認証情報を設定する必要があります。 + +データソース設定ページ + +[Jina Readerの公式サイト](https://jina.ai/reader) にログインして登録を完了し、APIキーを取得してから入力し、保存します。 + +Jina Reader APIキー設定 + +### Jina Reader を使用してWebコンテンツをクロールする + +ナレッジベース作成のページで**Sync from website**を選択し、スクレイピング対象のウェブページのURLを入力します。 + +Jina Readerでのウェブページ入力 + +設定項目には、サブページをクロールするかどうか、クロールされるページ数の上限、サイトマップのクロールを使用するかどうかなどがあります。設定が完了したら **Run** をクリックし、解析結果のページをプレビューします。 + +クロール設定と実行 + +解析されたテキストをナレッジベースのドキュメントにインポートし、結果を確認します。**Add URL** をクリックすると、新しいウェブページをさらにインポートできます。 + +クロール結果のインポート + +クロールが完了すると、Webページのコンテンツがナレッジベースに組み込まれます。 \ No newline at end of file diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx new file mode 100644 index 00000000..67c9e6ef --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx @@ -0,0 +1,57 @@ +--- +title: 知识库创建步骤 +--- + +创建知识库并上传文档大致分为以下步骤: + +1. 创建知识库。通过上传本地文件、导入在线数据或创建一个空的知识库。 + + +选择通过上传本地文件或导入在线数据的方式创建知识库 + + +2. 指定分段模式。该阶段是内容的预处理与数据结构化过程,长文本将会被划分为多个内容分段。你可以在此环节预览文本的分段效果。 + + +了解如何设置文本分段和清洗规则 + + +3. 设定索引方法和检索设置。知识库在接收到用户查询问题后,按照预设的检索方式在已有的文档内查找相关内容,提取出高度相关的信息片段供语言模型生成高质量答案。 + + +了解如何配置检索方式和相关设置 + + +4. 等待分段嵌入 +5. 
完成上传,在应用内关联知识库并使用。你可以参考[在应用内集成知识库](../integrate-knowledge-within-application.md),搭建出能够基于知识库进行问答的 LLM 应用。如需修改或管理知识库,请参考[知识库管理与文档维护](../knowledge-and-documents-maintenance/)。 + +![完成知识库的创建](https://assets-docs.dify.ai/2024/12/a3362a1cd384cb2b539c9858de555518.png) + +### 参考阅读 + +#### ETL + +在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 提供了两种可选的 ETL 方案:**Dify ETL** 和 [**Unstructured ETL**](https://unstructured.io/)。Unstructured 能够高效地提取你的数据,并将其转换为干净的数据用于后续步骤。Dify 各版本的 ETL 方案选择: + +* SaaS 版不可选,默认使用 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL。 + +文件解析支持格式的差异: + +| Dify ETL | Unstructured ETL | +| --- | --- | +| txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv | txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv、eml、msg、pptx、ppt、xml、epub | + +不同的 ETL 方案在文件提取效果方面也会存在差异,想了解更多关于 Unstructured ETL 的数据处理方式,请参考[官方文档](https://docs.unstructured.io/open-source/core-functionality/partitioning)。 + +#### **Embedding** + +**Embedding 嵌入**是一种将离散型变量(如单词、句子或者整个文档)转化为连续的向量表示的技术。它可以将高维数据(如单词、短语或图像)映射到低维空间,提供一种紧凑且有效的表示方式。这种表示不仅减少了数据的维度,还保留了重要的语义信息,使得后续的内容检索更加高效。 + +**Embedding 模型**是一种专门用于将文本向量化的大语言模型,它擅长将文本转换为密集的数值向量,有效捕捉语义信息。 + +> 如需了解更多,请参考:[《Dify:Embedding 技术与 Dify 知识库设计/规划》](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ)。 + +#### **元数据** + +如需使用元数据功能管理知识库,请参阅 [元数据](https://docs.dify.ai/zh-hans/guides/knowledge-base/metadata)。 \ No newline at end of file diff --git a/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx new file mode 100644 index 00000000..525b503e --- /dev/null +++ b/ja-jp/guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.mdx @@ -0,0 +1,203 @@ +--- +title: 3. 
インデックス方法と検索設定を指定 +--- + +コンテンツの分割モードを選択した後、構造化されたコンテンツの**インデックス方法**と**検索設定**を行います。 + +## インデックス方法の設定 + +検索エンジンが効率的なインデックスアルゴリズムを通じてユーザーの質問に最も関連するウェブページコンテンツをマッチングするように、インデックス方法の適切さはLLMがナレッジベース内のコンテンツを検索する効率と回答の正確性に直接影響します。 + +**高品質**と**エコノミー**の2種類のインデックス方法を提供しており、それぞれ異なる検索設定オプションがあります: + + + 元のQ&Aモード(コミュニティ版のみ対応)は、高品質インデックス方法のオプションになりました。 + + + + + **高品質** + + 高品質モードでは、Embeddingモデルを使用して、分割されたテキストブロックを数値ベクトルに変換し、大量のテキスト情報をより効果的に圧縮・保存します。**これによりユーザーの質問とテキスト間のマッチングがより正確になります**。 + + コンテンツブロックをベクトル化してデータベースに登録した後、ユーザーの質問にマッチするコンテンツブロックを効果的に取り出す検索方法が必要です。高品質モードでは、ベクトル検索、全文検索、ハイブリッド検索の3つの検索設定を提供しています。各設定の詳細については、[検索設定](#検索方法の指定)を参照してください。 + + 高品質モードを選択した後、現在のナレッジベースのインデックス方法を後から**「エコノミー」インデックスモード**にダウングレードすることはできません。切り替えが必要な場合は、ナレッジベースを新しく作成し、インデックス方法を再選択することをお勧めします。 + + > Embedding技術とベクトルについての詳細は、[「Embedding技術とDify」](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ)を参照してください。 + + 高品質モード + + **Q\&Aモードの有効化(オプション、[コミュニティ版](../../../getting-started/install-self-hosted/)のみ)** + + このモードを有効にすると、システムはアップロードされたテキストを分割し、各分割のコンテンツを要約して自動的にQ\&Aマッチングペアを生成します。一般的な「Q to P」(ユーザーの質問がテキスト段落にマッチング)戦略とは異なり、QAモードでは「Q to Q」(質問が質問にマッチング)戦略を採用しています。 + + これは「よくある質問」文書内のテキストが**通常、完全な文法構造を持つ自然言語**であるため、Q to Qモードによって質問と回答のマッチングがより明確になり、同時に高頻度で類似度の高い質問のシナリオにも対応できるからです。 + + > **Q\&Aモードは「中国語、英語、日本語」の3言語のみサポートしています。このモードを有効にするとより多くのLLM Tokensを消費する可能性があり、**[**エコノミーインデックス方法**](setting-indexing-methods.md#エコノミー)**は使用できません。** + + Q\&A チャンキング + + ユーザーが質問すると、システムは最も類似した質問を見つけ、対応する分割を回答として返します。この方法はより精密で、ユーザーの質問に直接マッチングするため、ユーザーが本当に必要とする情報をより正確に検索できます。 + + Q to P と Q to Q のインデックスモードの違い + + + **エコノミー** + + エコノミーモードでは、各ブロック内で10個のキーワードを使用して検索し、精度は下がりますが費用は発生しません。検索されたブロックに対しては、逆引きインデックス方式のみで最も関連性の高いブロックを選択します。詳細は[以下](#検索方法の指定)をお読みください。 + + エコノミータイプのインデックス方法を選択した後、実際の効果が良くないと感じる場合は、ナレッジベース設定ページで**「高品質」インデックス方法**にアップグレードできます。 + + エコノミーモード + + + +## 検索方法の指定 + +ナレッジベースはユーザーのクエリを受け取った後、事前設定された検索方法に従って既存の文書内で関連コンテンツを検索し、言語モデルが高品質な回答を生成するために高度に関連する情報の断片を抽出します。これはLLMが取得できる背景情報を決定し、生成結果の正確性と信頼性に影響を与えます。 + 
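上記の「ベクトル化したコンテンツブロックを類似度でマッチングし、TopKとScoreしきい値で絞り込む」という流れは、概念的には次のように表せます。`embed` は説明用に仮置きした埋め込み関数(文字バイグラムの頻度ベクトル)であり、実際のDifyではEmbeddingモデルが密な数値ベクトルを生成します。あくまで動作イメージを示すスケッチであり、実装そのものではありません。

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # 仮の埋め込み: 文字バイグラムの出現頻度ベクトル(実際はEmbeddingモデルが密ベクトルを返す)
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a: Counter, b: Counter) -> float:
    # 2つの疎ベクトル間のコサイン類似度を計算する
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 3, score_threshold: float = 0.0):
    # クエリをベクトル化し、各コンテンツブロックとの類似度を計算して
    # Scoreしきい値を超えたものを類似度の降順にTopK件返す
    q = embed(query)
    scored = [(cosine(q, embed(c)), c) for c in chunks]
    return sorted([s for s in scored if s[0] >= score_threshold], reverse=True)[:top_k]
```

本文の説明どおり、TopKを大きくするほど返されるブロック数の上限が増え、Scoreしきい値を上げるほど返される件数は減る、という挙動になります。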
+一般的な検索方法には、ベクトル類似度に基づく意味検索と、キーワードに基づく精密マッチングがあります。前者はテキストコンテンツブロックと質問クエリをベクトルに変換し、ベクトル類似度の計算によってより深いレベルの意味的関連性をマッチングします。後者は検索エンジンでよく使われる検索方法である逆引きインデックスを通じて、質問と重要なコンテンツをマッチングします。 + +異なるインデックス方法には異なる検索設定があります。 + + + + **高品質インデックス** + + 高品質インデックス方法では、Difyはベクトル検索、全文検索、ハイブリッド検索の設定を提供しています: + + 検索設定 + + **ベクトル検索** + + **定義:** ユーザーが入力した質問をベクトル化し、クエリテキストの数値ベクトルを生成し、クエリベクトルとナレッジベース内の対応するテキストベクトル間の距離を比較し、隣接する分割コンテンツを探します。 + + ベクトル検索 + + **ベクトル検索設定:** + + **Rerankモデル:** デフォルトでは無効です。有効にすると、ベクトル検索によって呼び出されたコンテンツセグメントを第三者のRerankモデルを使用して再度並べ替え、並べ替え結果を最適化します。LLMがより正確なコンテンツを取得し、出力の品質を向上させるのを助けます。このオプションを有効にする前に、「設定」→「モデルプロバイダー」に移動し、RerankモデルのAPIキーを事前に設定する必要があります。 + + > この機能を有効にすると、Rerankモデルのトークンが消費されます。詳細については、対応するモデルの価格説明を参照してください。 + + **TopK:** ユーザーの質問との類似度が最も高いテキスト断片をフィルタリングするために使用されます。システムは同時に使用するモデルのコンテキストウィンドウサイズに基づいて断片の数を動的に調整します。デフォルト値は3です。値が高いほど、呼び出されるテキストセグメントの予想数が多くなります。 + + **Scoreしきい値:** テキスト断片をフィルタリングする類似度のしきい値を設定するために使用され、設定されたスコアを超えるテキスト断片のみを呼び出します。デフォルト値は0.5です。値が高いほど、テキストと質問の類似度の要求が高くなり、呼び出されるテキストの予想数も少なくなります。 + + > TopKとScore設定はRerankステップでのみ有効であるため、Rerankモデルを追加して有効にする必要があります。 + + *** + + **全文検索** + + **定義:** キーワード検索、つまり文書内のすべての語彙のインデックス作成です。ユーザーが質問を入力した後、明示的なキーワードによってナレッジベース内の対応するテキスト断片をマッチングし、キーワードに合致するテキスト断片を返します。検索エンジンの明示的な検索と類似しています。 + + 全文検索 + + **Rerankモデル:** デフォルトでは無効です。有効にすると、全文検索によって呼び出されたコンテンツセグメントを第三者のRerankモデルを使用して再度並べ替え、並べ替え結果を最適化します。LLMに再並べ替えされたセグメントを送信し、出力コンテンツの品質を向上させます。このオプションを有効にする前に、「設定」→「モデルプロバイダー」に移動し、RerankモデルのAPIキーを事前に設定する必要があります。 + + > この機能を有効にすると、Rerankモデルのトークンが消費されます。詳細については、対応するモデルの価格説明を参照してください。 + + **TopK:** ユーザーの質問との類似度が最も高いテキスト断片をフィルタリングするために使用されます。システムは同時に使用するモデルのコンテキストウィンドウサイズに基づいて断片の数を動的に調整します。システムのデフォルト値は3です。値が高いほど、呼び出されるテキストセグメントの予想数が多くなります。 + + **Scoreしきい値:** テキスト断片をフィルタリングする類似度のしきい値を設定するために使用され、設定されたスコアを超えるテキスト断片のみを呼び出します。デフォルト値は0.5です。値が高いほど、テキストと質問の類似度の要求が高くなり、呼び出されるテキストの予想数も少なくなります。 + + > TopKとScore設定はRerankステップでのみ有効であるため、Rerankモデルを追加して有効にする必要があります。 + + *** + + **ハイブリッド検索** + + **定義:** 
全文検索とベクトル検索、またはRerankモデルを同時に実行し、クエリ結果からユーザーの質問に最もマッチする最良の結果を選択します。 + + ハイブリッド検索 + + ハイブリッド検索設定では、**「重み付け設定」**または**「Rerankモデル」**の有効化を選択できます。 + + * **重み付け設定** + + ユーザーが意味的優先度とキーワード優先度にカスタム重みを付けることができます。キーワード検索はナレッジベース内での全文検索(Full Text Search)を指し、意味検索はナレッジベース内でのベクトル検索(Vector Search)を指します。 + + * **意味値を1にする** + + **意味検索モードのみを有効にします**。Embeddingモデルを活用することで、クエリに含まれる正確な単語がナレッジベースになくても、ベクトル距離を計算することで検索の深度を高め、正確なコンテンツを返すことができます。また、多言語コンテンツを処理する必要がある場合、意味検索は異なる言語間の意味変換をキャプチャし、より正確なクロス言語検索結果を提供できます。 + * **キーワード値を1にする** + + **キーワード検索モードのみを有効にします**。ユーザーが入力した情報テキストをナレッジベース全体でマッチングし、ユーザーが正確な情報や用語を知っているシナリオに適しています。この方法は消費する計算リソースが比較的少なく、大量の文書を含むナレッジベース内での迅速な検索に適しています。 + * **キーワードと意味の重みをカスタマイズする** + + 異なる値を1に引き上げるだけでなく、両者の重みを継続的に調整して、ビジネスシナリオに合った最適な重み比率を見つけることができます。 + + > 意味検索とは、ユーザーの質問とナレッジベースコンテンツ内のベクトル間の距離を比較することを指します。距離が近いほど、マッチングの確率が高くなります。参考文献:[「Dify:Embedding技術とDifyナレッジベースの設計/計画」](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ)。 + + *** + + * **Rerankモデル** + + デフォルトでは無効です。有効にすると、ハイブリッド検索によって呼び出されたコンテンツセグメントを第三者のRerankモデルを使用して再度並べ替え、並べ替え結果を最適化します。LLMに再並べ替えされたセグメントを送信し、出力コンテンツの品質を向上させます。このオプションを有効にする前に、「設定」→「モデルプロバイダー」に移動し、RerankモデルのAPIキーを事前に設定する必要があります。 + + > この機能を有効にすると、Rerankモデルのトークンが消費されます。詳細については、対応するモデルの価格説明を参照してください。 + + **「重み付け設定」**と**「Rerankモデル」**設定では、以下のオプションを有効にすることができます: + + **TopK:** ユーザーの質問との類似度が最も高いテキスト断片をフィルタリングするために使用されます。システムは同時に使用するモデルのコンテキストウィンドウサイズに基づいて断片の数を動的に調整します。システムのデフォルト値は3です。値が高いほど、呼び出されるテキストセグメントの予想数が多くなります。 + + **Scoreしきい値:** テキスト断片をフィルタリングする類似度のしきい値を設定するために使用されます。つまり、設定されたスコアを超えるテキスト断片のみを呼び出します。システムはデフォルトでこの設定を無効にしています。つまり、呼び出されたテキスト断片の類似値をフィルタリングしません。有効にするとデフォルト値は0.5です。値が高いほど、呼び出されるテキストの予想数は少なくなります。 + + + **逆引きインデックス** + + エコノミーインデックス方法では、**逆引きインデックス方式**のみが提供されます。これは文書内のキーワードを迅速に検索するためのインデックス構造で、オンライン検索エンジンでよく使用されています。逆引きインデックスは**TopK**設定項目のみをサポートしています。 + + **TopK:** + + ユーザーの質問との類似度が最も高いテキスト断片をフィルタリングするために使用されます。システムは同時に使用するモデルのコンテキストウィンドウサイズに基づいて断片の数を動的に調整します。システムのデフォルト値は3です。値が高いほど、呼び出されるテキストセグメントの予想数が多くなります。 + + 逆引きインデックス + + + +## もっと読む + 
+検索設定を指定した後、以下のドキュメントを参照して、実際のシナリオでのキーワードとコンテンツブロックのマッチング状況を確認できます。 + + + 実際のシナリオでのキーワードとコンテンツブロックのマッチング状況を確認する + diff --git a/ja-jp/guides/knowledge-base/faq.mdx b/ja-jp/guides/knowledge-base/faq.mdx new file mode 100644 index 00000000..312df64e --- /dev/null +++ b/ja-jp/guides/knowledge-base/faq.mdx @@ -0,0 +1,20 @@ +--- +title: 常见问题 +version: '简体中文' +--- + +## 1. 导入文档与查询响应速度缓慢,如何排查原因? + +通常情况下,上传文档后的 Embedding 过程将耗费大量资源,有可能造成速度缓慢。请检查服务器负载,将日志切换为 debug 模式或检查 Embedding 的返回时间。 + +## 2. 上传至知识库的文档内容较多,分段后内容异常,如何处理? + +请检查服务器负载的内存占用情况,查看是否出现内存泄漏问题。 + +## 3. 知识库的文件处理一直显示为“排队中”,如何处理? + +此问题有可能是由于与 redis 服务的连接中断,导致任务无法退出。建议重启 Worker 节点。 + +## 4. 如何优化应用对知识库内容的检索结果? + +你可以通过调整检索设置,比较不同的参数进行优化。具体操作请参考[检索设置](/zh-cn/user-guide/knowledge-base/knowledge-base-creation/upload-documents#3)。 diff --git a/ja-jp/guides/knowledge-base/indexing-and-retrieval/hybrid-search.mdx b/ja-jp/guides/knowledge-base/indexing-and-retrieval/hybrid-search.mdx new file mode 100644 index 00000000..c0553edf --- /dev/null +++ b/ja-jp/guides/knowledge-base/indexing-and-retrieval/hybrid-search.mdx @@ -0,0 +1,103 @@ +--- +title: 混合检索 +version: '简体中文' +--- + +### 为什么需要混合检索? 
+ +RAG 检索环节中的主流方法是向量检索,即语义相关度匹配的方式。技术原理是通过将外部知识库的文档先拆分为语义完整的段落或句子,并将其转换(Embedding)为计算机能够理解的一串数字表达(多维向量),同时对用户问题进行同样的转换操作。 + +计算机能够发现用户问题与句子之间细微的语义相关性,比如 “猫追逐老鼠” 和 “小猫捕猎老鼠” 的语义相关度会高于 “猫追逐老鼠” 和 “我喜欢吃火腿” 之间的相关度。在将相关度最高的文本内容查找到后,RAG 系统会将其作为用户问题的上下文一起提供给大模型,帮助大模型回答问题。 + +除了能够实现复杂语义的文本查找,向量检索还有其他的优势: + +* 相近语义理解(如老鼠/捕鼠器/奶酪,谷歌/必应/搜索引擎) +* 多语言理解(跨语言理解,如输入中文匹配英文) +* 多模态理解(支持文本、图像、音视频等的相似匹配) +* 容错性(处理拼写错误、模糊的描述) + +虽然向量检索在以上情景中具有明显优势,但有某些情况效果不佳。比如: + +* 搜索一个人或物体的名字(例如,伊隆·马斯克,iPhone 15) +* 搜索缩写词或短语(例如,RAG,RLHF) +* 搜索 ID(例如, `gpt-3.5-turbo` , `titan-xlarge-v1.01` ) + +而上面这些的缺点恰恰都是传统关键词搜索的优势所在,传统关键词搜索擅长: + +* 精确匹配(如产品名称、姓名、产品编号) +* 少量字符的匹配(通过少量字符进行向量检索时效果非常不好,但很多用户恰恰习惯只输入几个关键词) +* 倾向低频词汇的匹配(低频词汇往往承载了语言中的重要意义,比如“你想跟我去喝咖啡吗?”这句话中的分词,“喝”“咖啡”会比“你”“想”“吗”在句子中承载更重要的含义) + +对于大多数文本搜索的情景,首要的是确保潜在最相关结果能够出现在候选结果中。向量检索和关键词检索在检索领域各有其优势。混合搜索正是结合了这两种搜索技术的优点,同时弥补了两方的缺点。 + +在混合检索中,你需要在数据库中提前建立向量索引和关键词索引,在用户问题输入时,分别通过两种检索器在文档中检索出最相关的文本。 + + + + + +“混合检索”实际上并没有明确的定义,本文以向量检索和关键词检索的组合为示例。如果我们使用其他搜索算法的组合,也可以被称为“混合检索”。比如,我们可以将用于检索实体关系的知识图谱技术与向量检索技术结合。 + +不同的检索系统各自擅长寻找文本(段落、语句、词汇)之间不同的细微联系,这包括了精确关系、语义关系、主题关系、结构关系、实体关系、时间关系、事件关系等。可以说没有任何一种检索模式能够适用全部的情景。**混合检索通过多个检索系统的组合,实现了多个检索技术之间的互补。** + +### **向量检索** + +定义:通过生成查询嵌入并查询与其向量表示最相似的文本分段。 + + + + + +**TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 3 。 + +**Score 阈值:** 用于设置文本片段筛选的相似度阈值,即:只召回超过设置分数的文本片段。系统默认关闭该设置,即不会对召回的文本片段相似值过滤。打开后默认值为 0.5 。 + +**Rerank 模型:** 你可以在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”,系统会在语义检索后对已召回的文档结果再一次进行语义重排序,优化排序结果。设置 Rerank 模型后,TopK 和 Score 阈值设置仅在 Rerank 步骤生效。 + +### **全文检索** + +定义:索引文档中的所有词汇,从而允许用户查询任意词汇,并返回包含这些词汇的文本片段。 + + + + + +**TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 3 。 + +**Rerank 模型:** 你可以在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”,系统会在全文检索后对已召回的文档结果再一次进行语义重排序,优化排序结果。设置 Rerank 模型后,TopK 和 Score 阈值设置仅在 Rerank 步骤生效。 + +### **混合检索** + +同时执行全文检索和向量检索,并应用重排序步骤,从两类查询结果中选择匹配用户问题的最佳结果,需配置 Rerank 模型 API。 + + + + + +**TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 
3 。 + +**Rerank 模型:** 你可以在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”,系统会在混合检索后对已召回的文档结果再一次进行语义重排序,优化排序结果。设置 Rerank 模型后,TopK 和 Score 阈值设置仅在 Rerank 步骤生效。 + +### 创建数据集时设置检索模式 + +进入“数据集->创建数据集”页面并在检索设置中设置不同的检索模式: + + + + + +### 数据集设置中修改检索模式 + +进入“数据集->选择数据集->设置”页面中可以对已创建的数据集修改不同的检索模式。 + + + + + +### 提示词编排中修改检索模式 + +进入“提示词编排->上下文->选择数据集->设置”页面中可以在创建应用时修改不同的检索模式。 + + + + diff --git a/ja-jp/guides/knowledge-base/indexing-and-retrieval/rerank.mdx b/ja-jp/guides/knowledge-base/indexing-and-retrieval/rerank.mdx new file mode 100644 index 00000000..0f0bab5a --- /dev/null +++ b/ja-jp/guides/knowledge-base/indexing-and-retrieval/rerank.mdx @@ -0,0 +1,64 @@ +--- +title: 重排序 +version: '简体中文' +--- + +### 为什么需要重排序? + +混合检索能够结合不同检索技术的优势获得更好的召回结果,但在不同检索模式下的查询结果需要进行合并和归一化(将数据转换为统一的标准范围或分布,以便更好地进行比较、分析和处理),然后再一起提供给大模型。这时候我们需要引入一个评分系统:重排序模型(Rerank Model)。 + +**重排序模型会计算候选文档列表与用户问题的语义匹配度,根据语义匹配度重新进行排序,从而改进语义排序的结果**。其原理是计算用户问题与给定的每个候选文档之间的相关性分数,并返回按相关性从高到低排序的文档列表。常见的 Rerank 模型如:Cohere rerank、bge-reranker 等。 + + + + + +在大多数情况下,在重排序之前会有一次前置检索,这是由于计算查询与数百万个文档之间的相关性得分将会非常低效。所以,**重排序一般都放在搜索流程的最后阶段,非常适合用于合并和排序来自不同检索系统的结果**。 + +不过,重排序并不是只适用于不同检索系统的结果合并,即使是在单一检索模式下,引入重排序步骤也能有效帮助改进文档的召回效果,比如我们可以在关键词检索之后加入语义重排序。 + +在具体实践过程中,除了将多路查询结果进行归一化之外,在将相关的文本分段交给大模型之前,我们一般会限制传递给大模型的分段个数(即 TopK,可以在重排序模型参数中设置),这样做的原因是大模型的输入窗口存在大小限制(一般为 4K、8K、16K、128K 的 Token 数量),你需要根据选用的模型输入窗口的大小限制,选择合适的分段策略和 TopK 值。 + +需要注意的是,即使模型上下文窗口很足够大,过多的召回分段会可能会引入相关度较低的内容,导致回答的质量降低,所以重排序的 TopK 参数并不是越大越好。 + +重排序并不是搜索技术的替代品,而是一种用于增强现有检索系统的辅助工具。**它最大的优势是不仅提供了一种简单且低复杂度的方法来改善搜索结果,允许用户将语义相关性纳入现有的搜索系统中,而且无需进行重大的基础设施修改。** + +以 Cohere Rerank 为例,你只需要注册账户和申请 API ,接入只需要两行代码。另外,他们也提供了多语言模型,也就是说你可以将不同语言的文本查询结果进行一次性排序。 + +### 如何配置 Rerank 模型? + +Dify 目前已支持 Cohere Rerank 模型,进入“模型供应商-> Cohere”页面填入 Rerank 模型的 API 秘钥: + + + + + +### + +### 如何获取 Cohere Rerank 模型? 
登录:[https://cohere.com/rerank](https://cohere.com/rerank),在页内注册并申请 Rerank 模型的使用资格,获取 API 密钥。 + +### 数据集检索模式中设置 Rerank 模型 + +进入“数据集->创建数据集->检索设置”页面并添加 Rerank 设置。除了在创建数据集时设置 Rerank,你也可以在已创建的数据集设置内或应用编排的数据集召回模式设置中更改 Rerank 配置。 + + + + + +**TopK:** 用于设置 Rerank 后返回相关文档的数量。 + +**Score 阈值:** 用于设置 Rerank 后返回相关文档的最低分值。设置 Rerank 模型后,TopK 和 Score 阈值设置仅在 Rerank 步骤生效。 + +### 数据集多路召回模式中设置 Rerank 模型 + +进入“提示词编排->上下文->设置”页面中设置为多路召回模式时需开启 Rerank 模型。 + +查看更多关于多路召回模式的说明,[《多路召回》](/zh-cn/user-guide/knowledge-base/integrate-knowledge-within-application)。 + + + + diff --git a/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx b/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx new file mode 100644 index 00000000..7c503941 --- /dev/null +++ b/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval-augment.mdx @@ -0,0 +1,26 @@ +--- +title: 检索增强生成(RAG) +version: '简体中文' +--- + +### RAG 的概念解释 + +以向量检索为核心的 RAG 架构,已成为帮助大模型获取最新外部知识、同时缓解其生成幻觉问题的主流技术框架,并且已在相当多的应用场景中落地实践。 + +开发者可以利用该技术低成本地构建一个 AI 智能客服、企业智能知识库、AI 搜索引擎等,通过自然语言输入与各类知识组织形式进行对话。以一个有代表性的 RAG 应用为例: + +在下图中,当用户提问 “美国总统是谁?” 时,系统并不是将问题直接交给大模型来回答,而是先将用户问题在知识库中(如下图中的维基百科)进行向量搜索,通过语义相似度匹配的方式查询到相关的内容(拜登是美国现任第46届总统…),然后再将用户问题和搜索到的相关知识提供给大模型,使得大模型获得足够完备的知识来回答问题,以此获得更可靠的问答结果。 + + + + + +**为什么需要这样做呢?** + +我们可以把大模型比作是一个超级专家,他熟悉人类各个领域的知识,但他也有自己的局限性,比如他不知道你个人的一些状况,因为这些信息是你私人的,不会在互联网上公开,所以他没有提前学习的机会。 + +当你想雇佣这个超级专家来充当你的家庭财务顾问时,需要允许他在接受你的提问时先翻看一下你的投资理财记录、家庭消费支出等数据。这样他才能根据你个人的实际情况提供专业的建议。 + +**这就是 RAG 系统所做的事情:帮助大模型临时性地获得他所不具备的外部知识,允许它在回答问题之前先找答案。** + +根据上面这个例子,我们很容易发现 RAG 系统中最核心的是外部知识的检索环节。专家能不能向你提供专业的家庭财务建议,取决于能不能精确找到他需要的信息,如果他找到的不是投资理财记录,而是家庭减肥计划,那再厉害的专家都会无能为力。 diff --git a/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval.mdx b/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval.mdx new file mode 100644 index 00000000..97e68f5a --- /dev/null +++ b/ja-jp/guides/knowledge-base/indexing-and-retrieval/retrieval.mdx @@ -0,0 +1,24 @@ +--- +title: 召回模式 +version: 
'简体中文' +--- + +当用户构建知识库问答类的 AI 应用时,如果在应用内关联了多个知识库,此时需要应用 Dify 的召回策略决定从哪些知识库中检索内容。 + + + + + +### 召回设置 + +根据用户意图同时匹配所有知识库,从多路知识库查询相关文本片段,经过重排序步骤,从多路查询结果中选择匹配用户问题的最佳结果,需配置 Rerank 模型 API。在多路召回模式下,检索器会在所有与应用关联的知识库中去检索与用户问题相关的文本内容,并将多路召回的相关文档结果合并,并通过 Rerank 模型对检索召回的文档进行语义重排序。 + +在多路召回模式下,建议配置 Rerank 模型。你可以阅读 [重排序](/zh-cn/user-guide/knowledge-base/indexing-and-retrieval/rerank) 了解更多。 + +以下是多路召回模式的技术流程图: + + + + + +由于多路召回模式不依赖于模型的推理能力或知识库描述,该模式在多知识库检索时能够获得质量更高的召回效果,除此之外加入 Rerank 步骤也能有效改进文档召回效果。因此,当创建的知识库问答应用关联了多个知识库时,我们更推荐将召回模式配置为多路召回。 diff --git a/ja-jp/guides/knowledge-base/integrate-knowledge-within-application.mdx b/ja-jp/guides/knowledge-base/integrate-knowledge-within-application.mdx new file mode 100644 index 00000000..9b21336f --- /dev/null +++ b/ja-jp/guides/knowledge-base/integrate-knowledge-within-application.mdx @@ -0,0 +1,204 @@ +--- +title: 在应用内集成知识库 +version: '简体中文' +--- + +知识库可以作为外部知识提供给大语言模型用于精确回复用户问题,你可以在 Dify 的[所有应用类型](https://docs.dify.ai/zh-hans/guides/application-orchestrate#application_type)内关联已创建的知识库。 + +以聊天助手为例,使用流程如下: + +1. 进入 **工作室 -- 创建应用 --创建聊天助手** +2. 进入 **上下文设置** 点击 **添加** 选择已创建的知识库 +3. 在 **上下文设置 -- 参数设置** 内配置**召回策略** +4. 在 **元数据筛选** 板块中配置元数据的筛选条件,使用元数据功能筛选知识库内的文档 +5. 在 **添加功能** 内打开 **引用和归属** +6. 在 **调试与预览** 内输入与知识库相关的用户问题进行调试 +7. 
调试完成之后**保存并发布**为一个 AI 知识库问答类应用 + +*** + +### 关联知识库并指定召回模式 + +如果当前应用的上下文涉及多个知识库,需要设置召回模式以使得检索的内容更加精确。进入 **上下文 -- 参数设置 -- 召回设置**。 + +#### 召回设置 + +检索器会在所有与应用关联的知识库中去检索与用户问题相关的文本内容,并将多路召回的相关文档结果合并,以下是召回策略的技术流程图: + +![](https://assets-docs.dify.ai/2025/03/745b9de1cdd9465bfbad2ddc5f27bd12.png) + +根据用户意图同时检索所有添加至 **“上下文”** 的知识库,在多个知识库内查询相关文本片段,选择所有和用户问题相匹配的内容,最后通过 Rerank 策略找到最适合的内容并回答用户。该方法的检索原理更为科学。 + +![](https://assets-docs.dify.ai/2024/12/3e0c9be17f054d211c4385ab74d47dfb.png) + +举例:A 应用的上下文关联了 K1、K2、K3 三个知识库,当用户输入问题后,将在三个知识库内检索并汇总多条内容。为确保能找到最匹配的内容,需要通过 Rerank 策略确定与用户问题最相关的内容,确保结果更加精准与可信。 + +在实际问答场景中,每个知识库的内容来源和检索方式可能都有所差异。针对检索返回的多条混合内容,Rerank 策略是一个更加科学的内容排序机制。它可以帮助确认候选内容列表与用户问题的匹配度,改进多个知识间排序的结果以找到最匹配的内容,提高回答质量和用户体验。 + +考虑到 Rerank 的使用成本和业务需求,多路召回模式提供了以下两种 Rerank 设置: + +**权重设置** + +该设置无需配置外部 Rerank 模型,重排序内容**无需额外花费**。可以通过调整语义或关键词的权重比例条,选择最适合的内容匹配策略。 + +* **语义值为 1** + + 仅启用语义检索模式。借助 Embedding 模型,即便知识库中没有出现查询中的确切词汇,也能通过计算向量距离的方式提高搜索的深度,返回正确内容。此外,当需要处理多语言内容时,语义检索能够捕捉不同语言之间的意义转换,提供更加准确的跨语言搜索结果。 + + > 语义检索指的是比对用户问题与知识库内容中的向量距离。距离越近,匹配的概率越大。参考阅读:[《Dify:Embedding 技术与 Dify 数据集设计/规划》](https://mp.weixin.qq.com/s/vmY\_CUmETo2IpEBf1nEGLQ)。 +* **关键词值为 1** + + 仅启用关键词检索模式。通过用户输入的信息文本在知识库全文匹配,适用于用户知道确切的信息或术语的场景。该方法所消耗的计算资源较低,适合在大量文档的知识库内快速检索。 +* **自定义关键词和语义权重** + + 除了仅启用语义检索或关键词检索模式,我们还提供了灵活的自定义权重设置。你可以通过不断调试二者的权重,找到符合业务场景的最佳权重比例。 + +**Rerank 模型** + +Rerank 模型是一种外部评分系统,它会计算用户问题与给定的每个候选文档之间的相关性分数,从而改进语义排序的结果,并按相关性返回从高到低排序的文档列表。 + +虽然此方法会产生一定的额外花费,但是更加擅长处理知识库内容来源复杂的情况,例如混合了语义查询和关键词匹配的内容,或返回内容存在多语言的情况。 + +Dify 目前支持多个 Rerank 模型,进入 “模型供应商” 页填入 Rerank 模型(例如 Cohere、Jina AI 等模型)的 API Key。 + + + + + +**可调参数** + +* **TopK** + + 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小,动态调整分段数量。数值越高,预期被召回的文本分段数量越多。 +* **Score 阈值** + + 用于设置文本片段筛选的相似度阈值。向量检索的相似度分数需要超过设置的分数后才会被召回,数值越高,预期被召回的文本数量越少。 + +### 使用元数据筛选知识 + +#### 聊天流/工作流 + +在 **聊天流/工作流** 的 **知识检索** 节点中,你可以使用 **元数据筛选** 功能精确检索文档。该功能有助于你根据文档的元数据字段(如标签、类别或访问权限)优化检索结果。 + +##### 配置步骤 + +1. 
选择筛选模式 + + - **禁用模式**(默认):禁用 **元数据筛选** 功能,不配置任何筛选条件。 + + - **自动模式**:系统会根据传输给该 **知识检索** 节点的 **查询变量** 自动配置筛选条件,适用于简单的筛选需求。 + + > 启用自动模式后,你依然需要在 **模型** 栏中选择合适的大模型以执行文档检索任务。 + + ![model_selection](https://assets-docs.dify.ai/2025/03/fe387793ad9923660f9f9470aacff01b.png) + + - **手动模式**:用户可以手动配置筛选条件,自由设置筛选规则,适用于复杂的筛选需求。 + +![](https://assets-docs.dify.ai/2025/03/ec6329e265e035e3a0d6941c9313a19d.png) + +2. 如果你选择了 **手动模式**,请参照以下步骤配置筛选条件: + + 1. 点击 **条件** 按钮,弹出配置框。 + + ![conditions](https://assets-docs.dify.ai/2025/03/cd80d150f6f5646350b7ac8dfee46429.png) + + 2. 点击配置框中的 **+添加条件** 按钮: + + - 可以从下拉列表中选择一个已选中知识库内的元数据字段,添加到筛选条件列表中。 + + > 如果你同时选择了多个知识库,下拉列表只会显示这些知识库共有的元数据字段。 + + - 可以在 **搜索元数据** 搜索框中搜索你需要的字段,添加到筛选条件列表中。 + + ![add_condition](https://assets-docs.dify.ai/2025/03/72678c4174f753f306378b748fbe6635.png) + + 3. 如果需要添加多条字段,可以重复点击 **+添加条件** 按钮。 + + ![add_more_fields](https://assets-docs.dify.ai/2025/03/aeb518c40aabdf467c9d2c23016d0a16.png) + + 4. 配置字段类型的筛选条件: + + | 字段类型 | 筛选条件 | 筛选条件说明与示例 | + | --- | --- | --- | + | 字符串 | is | 字段的值必须与你输入的值完全匹配。例如,如果你设置筛选条件为 `is "Published"`,则只会返回标记为 "Published" 的文档。 | + | | is not | 字段的值不能与你输入的值匹配。例如,如果你设置筛选条件为 `is not "Draft"`,则会返回所有未标记为 "Draft" 的文档。 | + | | is empty | 字段的值为空。如果你配置了此条件,可以检索到未标记该字符串的文档。 | + | | is not empty | 字段的值不为空。如果你配置了此条件,可以检索到标记了该字符串的文档。 | + | | contains | 字段的值包含你输入的文本。例如,如果你设置筛选条件为 `contains "Report"`,则会返回所有包含"Report"的文档,如"Monthly Report" 或 "Annual Report"。 | + | | not contains | 字段的值不包含你输入的文本。例如,如果你设置筛选条件为 `not contains "Draft"`,则会返回所有不包含 "Draft" 的文档。 | + | | starts with | 字段的值以你输入的文本开头。例如,如果你设置筛选条件为 `starts with "Doc"`,则会返回所有以"Doc"开头的文档,如 "Doc1"、"Document"等。 | + | | ends with | 字段的值以你输入的文本结尾。例如,如果你设置筛选条件为 `ends with "2024"`,则会返回所有以"2024"结尾的文档,如"Report 2024"、"Summary 2024"等。 | + | 数字 | = | 字段的值必须等于你输入的数字。例如,`= 10` 会匹配所有数字标记为 10 的文档。 | + | | ≠ | 字段的值不能等于你输入的数字。例如,`≠ 5` 会返回所有数字未标记为 5 的文档。 | + | | > | 字段的值必须大于你输入的数字。例如,`100` 会返回所有数字标记为大于 100 的文档。 | + | | < | 字段的值必须小于你输入的数字。例如,`< 50` 会返回所有数字标记为小于 50 的文档。 | + | | ≥ | 
字段的值必须大于或等于你输入的数字。例如,`≥ 20` 会返回所有数字标记为大于或等于 20 的文档。 | + | | ≤ | 字段的值必须小于或等于你输入的数字。例如,`≤ 200` 会返回所有数字标记为小于或等于 200 的文档。 | + | | is empty | 字段未设置值。例如,`is empty` 会返回所有该字段未标记数字的文档。 | + | | is not empty | 字段已设置值。例如,`is not empty` 会返回所有该字段已标记数字的文档。 | + | 时间 | is | 字段的时间值必须与你选择的时间完全匹配。例如,`is "2024-01-01"` 只会返回标记为 2024 年 1 月 1 日的文档。 | + | | before | 字段的时间值必须早于你选择的时间。例如,`before "2024-01-01"` 会返回所有标记为 2024 年 1 月 1 日之前的文档。 | + | | after | 字段的时间值必须晚于你选择的时间。例如,`after "2024-01-01"` 会返回所有标记为 2024 年 1 月 1 日之后的文档。 | + | | is empty | 字段的时间值为空。如果你配置了此条件,可以检索到未标记该时间信息的文档。 | + | | is not empty | 字段的时间值不为空。如果你配置了此条件,可以检索到标记了该时间信息的文档。 | + + 5. 选择并添加元数据筛选值: + - **变量**:选择 **变量(Variable)**,并选择该**聊天流/工作流**中需要用于筛选文档的变量。 + + ![variable](https://assets-docs.dify.ai/2025/03/4c2c55ffcf0f72553fabdf23f86597d0.png) + + - **常量**:选择 **常量(Constant)**,并手动输入你需要的常量值。 + + > **时间** 字段类型仅支持使用常量筛选文档。如果你选用时间字段筛选文档,系统会弹出时间选择器,供你选择具体的时间节点。 + + ![date_picker](https://assets-docs.dify.ai/2025/03/593da1575ddc995d938bd0cc3847cf3c.png) + + + 当你输入常量筛选值时,该筛选值必须与该元数据字段值的文本完全一致,系统才能返回该文档。例如,当你设置筛选条件为 `starts with "App"` 或 `contains "App"` 时,系统会返回标记为 “Apple” 的文档,但不会返回标记为 “apple” 或 “APPLE” 的文档。 + + + 6. 配置筛选条件之间的逻辑关系 `AND` 或 `OR`。 + - `AND`:当一个文档满足所有筛选条件时,才能检索到该文档。 + - `OR`:只要一个文档满足其中任意一个筛选条件,就可以检索到该文档。 + + ![logic](https://assets-docs.dify.ai/2025/03/822dac015308dc5c01768afc0697c1ad.png) + + 7. 关闭弹窗,系统将自动保存你的选择。 + +#### 聊天助手 + +在**聊天助手**中,**元数据筛选** 功能位于界面左下方的 **上下文** 板块下方,配置方法与**聊天流/工作流**中的操作一致。你可以按照相同的步骤配置元数据筛选条件。 + +![chatbot](https://assets-docs.dify.ai/2025/03/9d9a64bde687a686f24fd99d6f193c57.png) + +### 在知识库内查看已关联的应用 + +知识库将会在左侧信息栏中显示已关联的应用数量。将鼠标悬停至圆形信息图标时将显示所有已关联的 Apps 列表,点击右侧的跳转按钮即可快速查看对应的应用。 + +![查看已关联的应用](https://assets-docs.dify.ai/2024/12/28899b9b0eba8996f364fb74e5b94c7f.png) + +### 常见问题 + +1. 
**如何选择多路召回中的 Rerank 设置?** + +如果用户知道确切的信息或术语,可以通过关键词检索精确地匹配结果,那么请将 “权重设置” 中的**关键词设置为 1**。 + +如果知识库内并未出现确切词汇,或者存在跨语言查询的情况,那么推荐使用 “权重设置” 中的**语义设置为 1**。 + +如果业务人员对于用户的实际提问场景比较熟悉,想要主动调整语义或关键词的比值,那么推荐自行调整 “权重设置” 里的比值。 + +如果知识库内容较为复杂,无法通过语义或关键词等简单条件进行匹配,同时要求较为精准的回答,愿意支付额外的费用,那么推荐使用 **Rerank 模型** 进行内容检索。 + +2. **为什么会出现找不到 “权重设置” 或要求必须配置 Rerank 模型等情况,应该如何处理?** + +以下是知识库检索方式对文本召回的影响情况: + +![](https://assets-docs.dify.ai/2025/03/3e581e276770632a508bb311d4b35add.png) + +3. **引用多个知识库时,无法调整 “权重设置”,提示错误应如何处理?** + +出现此问题是因为上下文内所引用的多个知识库使用的嵌入模型(Embedding)不一致,系统为避免检索内容冲突而给出此提示。推荐在“模型供应商”内设置并启用 Rerank 模型,或者统一知识库的检索设置。 + +4. **为什么在多路召回模式下找不到“权重设置”选项,只能看到 Rerank 模型?** + +请检查你的知识库是否使用了“经济”型索引模式。如果是,那么将其切换为“高质量”索引模式。 diff --git a/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance.mdx b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance.mdx new file mode 100644 index 00000000..232142d7 --- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance.mdx @@ -0,0 +1,134 @@ +--- +title: 知识库管理与文档维护 +version: '简体中文' +--- + +## 知识库管理 + +> 知识库管理页仅面向团队所有者、团队管理员、编辑权限角色开放。 + +在 Dify 团队首页中,点击顶部的 “知识库” tab 页,选择需要管理的知识库,轻点左侧导航中的**设置**进行调整。你可以调整知识库名称、描述、可见权限、索引模式、Embedding 模型和检索设置。 + + + + + +**知识库名称**,用于区分不同的知识库。 + +**知识库描述**,用于描述知识库内文档代表的信息。 + +**可见权限**,提供 **「 只有我 」** 、 **「 所有团队成员 」** 和 **「部分团队成员」** 三种权限范围。不具有权限的人将无法访问该知识库。若选择将知识库公开至其它成员,则意味着其它成员同样具备该知识库的查看、编辑和删除权限。 + +**索引模式**,详细说明请[参考文档](/zh-cn/user-guide/knowledge-base/knowledge-base-creation/upload-documents#3)。 + +**Embedding 模型**, 修改知识库的嵌入模型,修改 Embedding 模型将对知识库内的所有文档重新嵌入,原先的嵌入将会被删除。 + +**检索设置**,详细说明请[参考文档](/zh-cn/user-guide/knowledge-base/knowledge-base-creation/upload-documents#4)。 + +*** + +### 使用 API 维护知识库 + +Dify 知识库提供整套标准 API ,开发者通过 API 调用对知识库内的文档、分段进行增删改查等日常管理维护操作,请参考[知识库 API 文档](maintain-dataset-via-api.md)。 + + + + + +## 维护知识库中的文本 + +### 添加文档 + +知识库(Knowledge)是一些文档(Documents)的集合。文档可以由开发者或运营人员上传,或通过同步其它数据源获得(对应数据源中的一个文件单位,例如 Notion 库内的一篇文档或新的在线文档网页)。 + +点击 「知识库」 > 「文档列表」 ,然后轻点 「 添加文件 
」,即可在已创建的知识库内上传新的文档。 + + + + + +*** + +### 禁用或归档文档 + +**禁用**:数据集支持将暂时不想被索引的文档或分段进行禁用,在数据集文档列表,点击禁用按钮,则文档被禁用;也可以在文档详情,点击禁用按钮,禁用整个文档或某个分段,禁用的文档将不会被索引。禁用的文档点击启用,可以取消禁用。 + +**归档**:一些不再使用的旧文档数据,如果不想删除可以将它进行归档,归档后的数据就只能查看或删除,不可以进行编辑。在数据集文档列表,点击归档按钮,则文档被归档,也可以在文档详情,归档文档。归档的文档将不会被索引。归档的文档也可以点击撤销归档。 + +*** + +### 查看文本分段 + +知识库内已上传的每个文档都会以文本分段(Chunks)的形式进行存储,你可以在分段列表内查看每一个分段的具体文本内容。 + + + + + +*** + +### 检查分段质量 + +文档分段对于知识库应用的问答效果有明显影响,在将知识库与应用关联之前,建议人工检查分段质量。 + +通过字符长度、标识符或者 NLP 语义分段等机器自动化的分段方式虽然能够显著减少大规模文本分段的工作量,但分段质量与不同文档格式的文本结构、前后文的语义联系都有关系,通过人工检查和订正可以有效弥补机器分段在语义识别方面的缺点。 + +检查分段质量时,一般需要关注以下几种情况: + +* **过短的文本分段**,导致语义缺失; + + + + + +* **过长的文本分段**,导致语义噪音影响匹配准确性; + + + + + +* **明显的语义截断**,在使用最大分段长度限制时会出现强制性的语义截断,导致召回时缺失内容; + + + + + +*** + +### 添加文本分段 + +在分段列表内点击 「 添加分段 」 ,可以在文档内自行添加一个或批量添加多个自定义分段。 + + + + + +批量添加分段时,你需要先下载 CSV 格式的分段上传模板,并按照模板格式在 Excel 内编辑所有的分段内容,再将 CSV 文件保存后上传。 + + + + + +*** + +### 编辑文本分段 + +在分段列表内,你可以对已添加的分段内容直接进行编辑修改。包括分段的文本内容和关键词。 + + + + + +*** + +### 元数据管理 + +元数据用于标记不同来源文档的信息,例如网页数据的标题、网址、关键词、描述等。元数据将被用于知识库的分段召回过程中,作为结构化字段参与召回过滤或者显示引用来源。 + + + 元数据过滤及引用来源功能当前版本尚未支持。 + + + + + diff --git a/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx new file mode 100644 index 00000000..06de5e21 --- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/introduction.mdx @@ -0,0 +1,48 @@ +--- +title: 管理知识库 +--- + +> 知识库管理页仅面向团队所有者、团队管理员、拥有编辑权限的角色开放。 + +点击 Dify 平台顶部的“知识库”按钮,选择需要管理的知识库。轻点左侧导航中的**设置**进行调整。 + +你可以在此处调整知识库名称、描述、可见权限、索引模式、Embedding 模型和检索设置。 + +![知识库设置](https://assets-docs.dify.ai/2024/12/20fc93428f8f20f7acfce665c4ed4ddf.png) + +* **知识库名称**,用于区分不同的知识库。 +* **知识库描述**,用于描述知识库内文档代表的信息。 +* **可见权限**,提供 **“只有我”** 、**“所有团队成员”** 和 **“部分团队成员”** 三种权限范围。不具有权限的人将无法访问该知识库。若选择将知识库公开至其它成员,则意味着其它成员同样具备该知识库的查看、编辑和删除权限。 +* **索引方法**,详细说明请参考[索引方法文档](../create-knowledge-and-upload-documents/setting-indexing-methods)。 +* **Embedding 
模型**, 修改知识库的嵌入模型,修改 Embedding 模型将对知识库内的所有文档重新嵌入,原先的嵌入将会被删除。 +* **检索设置**,详细说明请参考[检索设置文档](../create-knowledge-and-upload-documents/setting-indexing-methods)。 + +*** + +### 查看知识库内已关联的应用 + +知识库将会在左侧信息栏中显示已关联的应用数量。将鼠标悬停至圆形信息图标时将显示所有已关联的 Apps 列表,点击右侧的跳转按钮即可快速查看对应的应用。 + +![查看已关联应用](https://assets-docs.dify.ai/2024/12/28899b9b0eba8996f364fb74e5b94c7f.png) + +*** + +你可以通过网页维护或 API 两种方式维护知识库内的文档。 + +#### 维护知识库内文档 + +支持管理知识库内的文档和对应的文档分段。详细说明请参考以下文档: + + + 详细说明请参考此文档 + + +#### 使用 API 维护知识库 + +Dify 知识库提供整套标准 API ,开发者通过 API 调用对知识库内的文档、分段进行增删改查等日常管理维护操作,请参考以下文档: + + + 详细说明请参考此文档 + + +![](https://assets-docs.dify.ai/2025/03/59f9f945a1d20a26c87662292d577db2.png) diff --git a/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx new file mode 100644 index 00000000..a336a3e5 --- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.mdx @@ -0,0 +1,695 @@ +--- +title: 通过 API 维护知识库 +version: '简体中文' +--- + +> 鉴权、调用方式与应用 Service API 保持一致,不同之处在于,所生成的单个知识库 API token 具备操作当前账号下所有可见知识库的权限,请注意数据安全。 + +### 使用知识库 API 的优势 + +通过 API 维护知识库可大幅提升数据处理效率,你可以通过命令行轻松同步数据,实现自动化操作,而无需在用户界面进行繁琐操作。 + +主要优势包括: + +* 自动同步: 将数据系统与 Dify 知识库无缝对接,构建高效工作流程; +* 全面管理: 提供知识库列表,文档列表及详情查询等功能,方便你自建数据管理界面; +* 灵活上传: 支持纯文本和文件上传方式,可针对分段(Chunks)内容的批量新增和修改操作; +* 提高效率: 减少手动处理时间,提升 Dify 平台使用体验。 + +### 如何使用 + +进入知识库页面,在左侧的导航中切换至 **API** 页面。在该页面中你可以查看 Dify 提供的知识库 API 文档,并可以在 **API 密钥** 中管理可访问知识库 API 的凭据。 + + + + + +### API 调用示例 + +#### 通过文本创建文档 + +输入示例: + +```json +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_text' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "text","text": "text","indexing_technique": "high_quality","process_rule": {"mode": "automatic"}}' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + 
"data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "text.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695690280, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} +``` + +#### 通过文件创建文档 + +输入示例: + +```json +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_file' \ +--header 'Authorization: Bearer {api_key}' \ +--form 'data="{"indexing_technique":"high_quality","process_rule":{"rules":{"pre_processing_rules":[{"id":"remove_extra_spaces","enabled":true},{"id":"remove_urls_emails","enabled":true}],"segmentation":{"separator":"###","max_tokens":500}},"mode":"custom"}}";type=text/plain' \ +--form 'file=@"/path/to/file"' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "Dify.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} + +``` + +#### **创建空知识库** + + + 仅用来创建空知识库 + + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "name", "permission": "only_me"}' +``` + +输出示例: + +```json +{ + "id": "", + "name": "name", + "description": null, + "provider": "vendor", + "permission": "only_me", + "data_source_type": null, + 
"indexing_technique": null, + "app_count": 0, + "document_count": 0, + "word_count": 0, + "created_by": "", + "created_at": 1695636173, + "updated_by": "", + "updated_at": 1695636173, + "embedding_model": null, + "embedding_model_provider": null, + "embedding_available": null +} +``` + +#### **知识库列表** + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets?page=1&limit=20' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data": [ + { + "id": "", + "name": "知识库名称", + "description": "描述信息", + "permission": "only_me", + "data_source_type": "upload_file", + "indexing_technique": "", + "app_count": 2, + "document_count": 10, + "word_count": 1200, + "created_by": "", + "created_at": "", + "updated_by": "", + "updated_at": "" + }, + ... + ], + "has_more": true, + "limit": 20, + "total": 50, + "page": 1 +} +``` + +#### 删除知识库 + +输入示例: + +```json +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +204 No Content +``` + +#### 通过文本更新文档 + +此接口基于已存在知识库,在此知识库的基础上通过文本更新文档 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_text' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"name": "name","text": "text"}' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "name.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "" +} +``` + +#### 通过文件更新文档 + 
+此接口基于已存在知识库,在此知识库的基础上通过文件更新文档的操作。 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_file' \ +--header 'Authorization: Bearer {api_key}' \ +--form 'data="{"name":"Dify","indexing_technique":"high_quality","process_rule":{"rules":{"pre_processing_rules":[{"id":"remove_extra_spaces","enabled":true},{"id":"remove_urls_emails","enabled":true}],"segmentation":{"separator":"###","max_tokens":500}},"mode":"custom"}}";type=text/plain' \ +--form 'file=@"/path/to/file"' +``` + +输出示例: + +```json +{ + "document": { + "id": "", + "position": 1, + "data_source_type": "upload_file", + "data_source_info": { + "upload_file_id": "" + }, + "dataset_process_rule_id": "", + "name": "Dify.txt", + "created_from": "api", + "created_by": "", + "created_at": 1695308667, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false, + "display_status": "queuing", + "word_count": 0, + "hit_count": 0, + "doc_form": "text_model" + }, + "batch": "20230921150427533684" +} +``` + + +#### **获取文档嵌入状态(进度)** + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{batch}/indexing-status' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data":[{ + "id": "", + "indexing_status": "indexing", + "processing_started_at": 1681623462.0, + "parsing_completed_at": 1681623462.0, + "cleaning_completed_at": 1681623462.0, + "splitting_completed_at": 1681623462.0, + "completed_at": null, + "paused_at": null, + "error": null, + "stopped_at": null, + "completed_segments": 24, + "total_segments": 100 + }] +} +``` + +#### **删除文档** + +输入示例: + +```bash +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```bash +{ + "result": "success" +} +``` + +#### **知识库文档列表** + 
+输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "data": [ + { + "id": "", + "position": 1, + "data_source_type": "file_upload", + "data_source_info": null, + "dataset_process_rule_id": null, + "name": "dify", + "created_from": "", + "created_by": "", + "created_at": 1681623639, + "tokens": 0, + "indexing_status": "waiting", + "error": null, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "archived": false + }, + ], + "has_more": false, + "limit": 20, + "total": 9, + "page": 1 +} +``` + +#### **新增分段** + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"segments": [{"content": "1","answer": "1","keywords": ["a"]}]}' +``` + +输出示例: + +```json +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], + "doc_form": "text_model" +} + +``` + +### 查询文档分段 + +输入示例: + +```bash +curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' +``` + +输出示例: + +```bash +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": 
true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], + "doc_form": "text_model" +} +``` + +### 删除文档分段 + +输入示例: + +```bash +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments/{segment_id}' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' +``` + +输出示例: + +```json +{ + "result": "success" +} +``` + +### 更新文档分段 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments/{segment_id}' \ +--header 'Authorization: Bearer {api_key}' \ +--header 'Content-Type: application/json' \ +--data-raw '{"segment": {"content": "1","answer": "1", "keywords": ["a"], "enabled": false}}' +``` + +输出示例: + +```json +{ + "data": [{ + "id": "", + "position": 1, + "document_id": "", + "content": "1", + "answer": "1", + "word_count": 25, + "tokens": 0, + "keywords": [ + "a" + ], + "index_node_id": "", + "index_node_hash": "", + "hit_count": 0, + "enabled": true, + "disabled_at": null, + "disabled_by": null, + "status": "completed", + "created_by": "", + "created_at": 1695312007, + "indexing_at": 1695312007, + "completed_at": 1695312007, + "error": null, + "stopped_at": null + }], + "doc_form": "text_model" +} +``` + +#### 新增知识库元数据字段 + +输入示例: + +```bash +curl --location 'https://api.dify.ai/v1/datasets/{dataset_id}/metadata' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer {api_key}' \ +--data '{ + "type":"string", + "name":"test" +}' +``` + +输出示例: + +```json +{ + "id": "9f63c91b-d60e-4142-bb0c-c81a54dc2db5", + "type": "string", + "name": "test" +} +``` + +#### 修改知识库元数据字段 + +输入示例: + +```bash +curl --location --request PATCH 'https://api.dify.ai/v1/datasets/{dataset_id}/metadata/{metadata_id}' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer {api_key}' \ +--data '{ + "name":"test" +}' +``` + +输出示例: + +```json +{ + "id": "9f63c91b-d60e-4142-bb0c-c81a54dc2db5", + "type": "string", + "name": "test" +} +``` + +#### 删除知识库元数据字段 + +输入示例: + +```bash +curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/metadata/{metadata_id}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```bash +200 success +``` + +#### 启用/禁用知识库元数据中的内置字段 + +输入示例: + +```bash +curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/metadata/built-in/{action}' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```bash +200 success +``` + +#### 修改文档的元数据(赋值) + +输入示例: + +```bash +curl --location 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/metadata' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer {api_key}' \ +--data '{ + "operation_data":[ + { + "document_id": "3e928bc4-65ea-4201-87c8-cbcc5871f525", + "metadata_list": [ + { + "id": "1887f5ec-966f-4c93-8c99-5ad386022f46", + "value": "dify", + "name": "test" + } + ] + } + ] +}' +``` + +输出示例: + +```bash +200 success +``` + +#### 数据集的元数据列表 + +输入示例: + +```bash +curl --location 'https://api.dify.ai/v1/datasets/{dataset_id}/metadata' \ +--header 'Authorization: Bearer {api_key}' +``` + +输出示例: + +```json +{ + "doc_metadata": [ + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "type": "string", + "name": "title", + "use_count": 42 + }, + { + "id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8", + "type": "number", + "name": "price", + "use_count": 28 + }, + { + "id": "7ba7b810-9dad-11d1-80b4-00c04fd430c9", + "type": "time", + "name": "created_at", + "use_count": 35 + } + ], + "built_in_field_enabled": true +} +``` + +### 错误信息 + +| 错误信息 | 错误码 | 原因描述 | +|------|--------|---------| +| no_file_uploaded | 400 | 请上传你的文件 | +| too_many_files | 400 | 只允许上传一个文件 | +| file_too_large | 413 | 文件大小超出限制 | +| unsupported_file_type | 415 | 不支持的文件类型。目前只支持以下内容格式:`txt`, `markdown`, `md`, `pdf`, `html`, `htm`, `xlsx`, `docx`, `csv` | +| high_quality_dataset_only | 400 | 当前操作仅支持“高质量”知识库 | +| dataset_not_initialized | 400 | 知识库仍在初始化或索引中。请稍候 | +| archived_document_immutable | 403 | 归档文档不可编辑 | +| dataset_name_duplicate | 409 | 知识库名称已存在,请修改你的知识库名称 | +| invalid_action | 400 | 无效操作 | +| document_already_finished | 400 | 文档已处理完成。请刷新页面或查看文档详情 | +| document_indexing | 400 | 文档正在处理中,无法编辑 | +| invalid_metadata | 400 | 元数据内容不正确。请检查并验证 | diff --git a/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx new file mode 100644 index 00000000..b46ab4df --- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.mdx @@ -0,0 +1,133 @@ +--- +title: 维护知识库内文档 +--- + +### 添加文档 + +知识库是文档的集合。文档支持本地上传,或导入其它在线数据源。知识库内的文档对应数据源中的一个文件单位,例如 Notion 库内的一篇文档或新的在线文档网页。 + +点击“知识库” → “文档列表” → “添加文件”,在已创建的知识库内上传新的文档。 + +![在知识库内上传新文档](https://assets-docs.dify.ai/2024/12/424ab491aaebe09b490a36d26c9fa8da.png) + +### 启用 / 禁用 / 归档 / 删除文档 + +**启用**:处于正常使用状态的文档,支持编辑内容与被知识库检索。对于已被禁用的文档,允许重新启用。已归档的文档需撤销归档状态后才能重新启用。 + +**禁用**:对于不希望在使用 AI 应用时被检索的文档,可以关闭文档右侧的蓝色开关按钮以禁用文档。禁用文档后,仍然可以编辑当前内容。 + +**归档**:对于一些不再使用的旧文档数据,如果不想删除可以将其归档。归档后的数据就只能查看或删除,无法重新编辑。你可以在知识库文档列表,点击归档按钮;或在文档详情页内进行归档。**归档操作支持撤销。** + +**删除**:⚠️ 危险操作。对于一些错误文档或明显有歧义的内容,可以点击文档右侧菜单按钮中的删除。删除后的内容将无法被找回,请进行谨慎操作。 + +> 以上选项均支持选中多个文档后批量操作。 + +![禁用或归档文档](https://assets-docs.dify.ai/2024/12/5e0e64859a1ac51602d167ec55ef9350.png) + +### 注意事项 + +* 对于 Sandbox/Free 版本用户,知识库若超过 **7 天** 未被使用,其中的文档将被自动禁用; +* 对于 Professional/Team 版本用户,知识库若超过 **30 天** 未被使用,其中的文档将被自动禁用。 + +![一键恢复被禁用的文档](https://assets-docs.dify.ai/2024/12/bf6485b17aec716741eb65e307c2274c.png) + +*** + +## 管理文本分段 + +### 查看文本分段 + +知识库内已上传的每个文档都会以文本分段(Chunks)形式进行存储。点击文档标题,在详情页中查看当前文档的分段列表,每页默认展示 10 个区块,你可以在网页底部调整每页的展示数量。 + +每个内容区块展示前 2 行的预览内容。若需要查看分段内的完整内容,轻点“展开分段”按钮即可查看。
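补充说明:文档上传或更新后,需等待分段嵌入完成,分段列表才会完整展示。若通过前文《通过 API 维护知识库》中的「获取文档嵌入状态(进度)」接口轮询进度,可以用类似下面的方式汇总返回结果(示意性 Python 片段;`embedding_progress` 为演示而假设的辅助函数,并非 Dify SDK 提供的接口,字段名取自该接口的输出示例):

```python
# 「获取文档嵌入状态(进度)」接口返回中的相关字段:
#   data[].completed_segments / data[].total_segments
# embedding_progress 为演示而假设的辅助函数,并非 Dify 官方接口。

def embedding_progress(status_json: dict) -> float:
    """汇总所有文档的分段嵌入进度,返回 0.0–1.0 的比例。"""
    done = sum(d["completed_segments"] for d in status_json["data"])
    total = sum(d["total_segments"] for d in status_json["data"])
    return done / total if total else 0.0

# 取自前文接口输出示例(仅保留相关字段):
sample = {
    "data": [{
        "indexing_status": "indexing",
        "completed_segments": 24,
        "total_segments": 100,
    }]
}

print(embedding_progress(sample))  # 0.24
```

实际轮询时,可按前文所示请求 `GET /datasets/{dataset_id}/documents/{batch}/indexing-status`,将返回的 JSON 传入该函数;当进度达到 1.0(或 `indexing_status` 显示处理完成)时即可停止轮询。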
+![展开内容分段](https://assets-docs.dify.ai/2024/12/86cc80f17fab1eea75aa73ee681e4663.png) + +你可以通过筛选栏快速查看所有已启用 / 未启用的文档。 + +![筛选文档分段](https://assets-docs.dify.ai/2025/01/47ef07319175a102bfd1692dcc6cac9b.png) + +*** + +### 检查分段质量 + +文档分段对于知识库应用的问答效果有明显影响,在将知识库与应用关联之前,建议人工检查分段质量。 + +检查分段质量时,一般需要关注以下几种情况: + +* **过短的文本分段**,导致语义缺失; +* **过长的文本分段**,导致语义噪音影响匹配准确性; +* **明显的语义截断**,在使用最大分段长度限制时会出现强制性的语义截断,导致召回时缺失内容; + +![分段质量检查示例](https://assets-docs.dify.ai/2024/12/ee081e98c1649aea4a5c2b15b88e11aa.png) +![分段质量检查示例](https://assets-docs.dify.ai/2024/12/ac47381ae4be183768dd025c37c049fa.png) +![分段质量检查示例](https://assets-docs.dify.ai/2024/12/b8ab7ac84028b0b16c3948f35015e069.png) + +*** + +### 添加文本分段 + +知识库中的文档支持单独添加文本分段,不同的分段模式对应不同的分段添加方法。 + + +添加文本分段为付费功能,请前往[此处](https://dify.ai/pricing)升级账号以使用功能。 + + + + + 点击分段列表顶部的“添加分段”按钮,可以在文档内自行添加一个或批量添加多个自定义分段。 + + ![通用模式 - 添加分段](https://assets-docs.dify.ai/2024/12/552ff4ab9e77130ad09aaef878b19cc9.png) + + + 点击分段列表顶部的「 添加分段 」按钮,可以在文档内自行添加一个或批量添加多个自定义**父分段。** + + ![父子模式 — 添加分区](https://assets-docs.dify.ai/2024/12/ed4be3bf178e3a41d53bcc10255ad3b2.png) + + 填写内容后,勾选尾部“连续新增”钮后,可以继续添加文本。 + + + +*** + +### 编辑文本分段 + + + + 你可以对已添加的分段内容直接进行编辑或修改,包括修改分段内的文本内容或关键词。 + + ![编辑文档分段](https://assets-docs.dify.ai/2024/12/8220e412e4c5a2bf729fb5dfcc1b7f4c.png) + + + 父分段包含其本身所包含的子分段内容,两者相互独立。你可以单独修改父分段或子分段的内容。 + + ![修改父分段](https://assets-docs.dify.ai/2024/12/7eedfee59a3c978cc4a29d9cf06fbbcc.png) + + 修改父分段后,点击 **“保存”** 后将不会影响子分段的内容。如需重新生成子分段内容,轻点 **“保存并重新生成子分段”**。 + + + +### 修改已上传文档的文本分段 + +已创建的知识库支持重新配置文档分段。 + + + + - 可在单个分段内保留更多上下文,适合需要处理复杂或上下文相关任务的场景。 + - 分段数量减少,从而降低处理时间和存储需求。 + + + - 提供更高的粒度,适合精确提取或总结文本内容。 + - 减少超出模型 token 限制的风险,更适配限制严格的模型。 + + + +你可以访问 **分段设置**,点击 **保存并处理** 按钮以保存对分段设置的修改,并重新触发当前文档的分段流程。当你保存设置并完成嵌入处理后,文档的分段列表将自动更新。 + +![Chunk Settings](https://assets-docs.dify.ai/2025/01/36cb20be8aae1f368ebf501c0d579051.png) + +*** + +### 元数据管理 + +如需了解元数据的相关信息,请参阅 [元数据](../metadata)。 diff --git a/ja-jp/guides/knowledge-base/knowledge-base-creation/introduction.mdx 
b/ja-jp/guides/knowledge-base/knowledge-base-creation/introduction.mdx new file mode 100644 index 00000000..b52f9e86 --- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-base-creation/introduction.mdx @@ -0,0 +1,143 @@ +--- +title: ナレッジベース作成 +--- + +ナレッジベースの作成および文書のアップロード手順は、主に以下のステップから成り立っています: + +1. ナレッジベースを新規作成し、ローカルの文書や[オンラインのデータ](./import-online-datasource/README.md)を取り込みます。 +2. 文書を分割する際のモードを選び、その効果をプレビューします。 +3. 検索機能のためのインデックス設定と検索オプションを構成します。 +4. 文書の分割処理が完了するまで待ちます。 +5. アップロードが完了したら、アプリ内でナレッジベースを利用開始します 🎉 + +各ステップの詳細について説明します: + +## 1. ナレッジベースの新規作成 + +Difyプラットフォームのトップメニューより **「ナレッジベース」→「新規作成」** を選択します。文書は、ローカルファイルのアップロードまたはオンラインデータの取り込みによってナレッジベースに追加できます。 + +* ローカルファイルのアップロード:ファイルをドラッグ&ドロップまたは選択してアップロードします。**一度に多数のファイルをアップロード**することが可能ですが、その上限は[サブスクリプションプラン](https://dify.ai/pricing)に依存します。 + + ローカルファイルのアップロードには以下の制約があります: + + * 一度にアップロードできる最大サイズは**15MB**です; + + * 使用しているSaaS[サブスクリプションプラン](https://dify.ai/pricing)によって、**一括アップロード可能なファイル数、文書の総アップロード数、ベクトルストレージ**の利用可能容量が制限されます。 + + ![ナレッジベースの作成](https://assets-docs.dify.ai/2024/12/effc826d2584d5f2983cdcd746099bb6.png) + +* オンラインデータの取り込み:ナレッジベース作成時に[オンラインデータの取り込み](./import-online-datasource/README.md)が可能で、詳細はオンラインデータ取り込みのガイドを参照してください。オンラインデータソースを利用するナレッジベースには、後からローカルの文書を追加したり、ローカルファイルタイプのナレッジベースへ変更したりすることはできません。これは、複数のデータソースが混在すると管理が複雑になるためです。 + +* 文書がまだ準備できていない場合でも、空のナレッジベースを先に作成し、後ほどローカル文書をアップロードしたり、オンラインデータを取り込んだりすることができます。 + +## 2. コンテンツ分割の指定方法 + +コンテンツをナレッジベースにアップロードした後の次のステップは、コンテンツの分割とデータのクレンジングです。**このステップでは、コンテンツの前処理とデータの構造化が行われ、長いテキストは複数のセクションに分けられます。** LLMはユーザーからの質問を受け取った際、ナレッジベース内のセクションをどれだけ正確に検索し取り出せるかで、その質問に対する正確な回答が可能かどうかが決まります。詳細については、[コンテンツ分割の指定方法](./chunking-and-cleaning-text.md)をご参照ください。 + +以下の2つの分割モードがあります: + +* **汎用分割モード** + + このモードでは、システムがユーザーの定義したルールに従ってコンテンツを独立したセクションに分けます。質問が入力されると、システムはその質問のキーワードを自動で分析し、これらのキーワードとナレッジベース内のセクションとの関連度を計算します。そして、関連度に基づいてセクションをランキングし、最も関連性の高いセクションを選びLLMへ送り、処理して回答を得ます。 + + + **注意**:以前の **「自動分割とクリーニング」モード** は **汎用分割モード** に自動的に更新されました。何も変更する必要はなく、デフォルト設定をそのまま使用し続けることができます。 + + +* **親子分割モード(階層分割モード)** + + 二層の構造を採用し、検索精度とコンテキスト情報のバランスを取ります。このモードでは、親セクション(Parent-chunk)がより大きなテキスト単位(例えば段落)を包含し、豊富なコンテキスト情報を提供します。子セクション(Child-chunk)はより小さなテキスト単位(例えば文)で、精確な検索に利用されます。システムは最初に子セクションを通じて精確な検索を行い関連性を確保した後、対応する親セクションを取得しコンテキスト情報を補完し、レスポンスを生成する際に正確さを保ちながら完全な背景情報を提供します。セクションの分割方法は、区切り文字と最大長さの設定を通してカスタマイズできます。 + +ナレッジベースを初めて作成する際は、[親子分割モード](./chunking-and-cleaning-text.md)を選択し、デフォルトのオプションを使用してナレッジベースの作成を行うことを推奨します。コンテンツセクションをカスタマイズしたい場合は、[分割ルール](./chunking-and-cleaning-text.md)を参照し、正規表現の文法に従って設定してください。 + +![汎用分割モードと親子分割モード](https://assets-docs.dify.ai/2024/12/b3052a6aae6e4d0e5701dde3a859e326.png) + + +**注意**:分割モードを選択し、ナレッジベースの作成を完了した後は、後からモードを変更することはできません。ナレッジベースにドキュメントを新たに追加する場合も、選択したコンテンツ分割戦略に従います。 + + +## 3. インデックス設定方法 + +コンテンツを構造化する前処理(分割とクリーニング)を行った後、構造化されたコンテンツに対してどのように検索を行うかの設定が必要です。検索エンジンが効率的なインデックスアルゴリズムを用いて、ユーザーの問い合わせに最も関連性の高い検索結果を提供できるように、インデックスの設定方法が重要です。これは、LLMがナレッジベースから情報を検索する効率と回答の精度に直接影響します。 + +以下に、三つのインデックス設定方法を紹介します。詳細は[インデックス設定方法](./setting-indexing-methods.md)をご覧ください。 + +* **高品質** + + エンベッディングモデル(Embeddingモデル)を利用して、分割されたテキストブロックを数値ベクトルに変換し、大量のテキスト情報をより効率的に圧縮・保管します。これにより、ユーザーの問い合わせとテキストとのマッチングがより精密に行われます。 + +* **経済的** + + 各テキストブロックごとに10個のキーワードを用いて検索を行います。精度は落ちますが、追加のコストはかかりません。 + +* **Q&Aモード(コミュニティ版のみ対応)** + + ナレッジベースへの文書アップロード時に、システムがテキストを分割して要約し、各ブロックごとにQ&Aのペアを生成します。FAQ形式の文書に適しています。 + +**高品質なインデックス設定方法**の利用を推奨します。 + +![インデックス設定方法](https://assets-docs.dify.ai/2024/12/d121beab4a688067ff482f2c33b7a1a3.png) + +## 4. 検索方法の選定 + +ユーザーからの問い合わせを受けた後、ナレッジベースは関連する情報を既存のドキュメントから見つけ出すために検索方法を用います。ビジネスの要求やデータの特徴に合わせて、検索方法を柔軟に組み合わせたり変更したりすることで、より効果的かつ正確な検索結果を提供できます。 + +異なるインデックス作成方法によって、様々な検索オプションが提供されます。詳細は[検索方法の選定](./selecting-retrieval-settings.md)セクションを参照してください。 + +* **高品質インデックス** + + * **ベクトル検索** + + ユーザーの質問をベクトル化し、クエリテキストの数値ベクトルを作成します。このクエリベクトルとナレッジベース内のテキストベクトルとの距離を比較し、最も近い内容を探索します。 + + * **全文検索** + + キーワードによる検索で、ドキュメント内の全単語を索引付けます。ユーザーが質問を提出すると、そのキーワードでナレッジベース内の適切なテキスト部分を検索し、一致する内容を返します。これは検索エンジンにおける一般的な全文検索と似ています。 + + * **ハイブリッド検索**(推奨) + + 全文検索とベクトル検索を同時に行い、リランクモデルを用いて両方の検索結果から最も適切な回答を選び出します。 + +* **経済的インデックス** + + * **インバーテッド検索** + + インバーテッド検索は、ドキュメント内のキーワードを迅速に検索するための索引構造で、オンライン検索エンジンで広く使用されています。 + +![検索方法の選定](https://assets-docs.dify.ai/2024/12/9b02fc353324221cc91f185a350775b6.png) + +選んだ検索方式に基づき、[リコールテスト/引用帰属](../retrieval-test-and-citation.md)セクションを参照し、キーワードとコンテンツの一致度をテストできます。 + +## 5. アップロード完了 + +上述した設定を終えて、「保存して処理」ボタンをクリックすることで、ナレッジベースの作成が完了します。アプリ内でナレッジベースを統合する方法については、[ナレッジベースの統合](../integrate-knowledge-within-application.md)セクションを参照してください。ナレッジベースの更新や管理が必要な場合は、[ナレッジベースの管理と文書のメンテナンス](../knowledge-and-documents-maintenance.md)セクションをご覧ください。 + +![ナレッジベース作成完了](https://assets-docs.dify.ai/2024/12/a3362a1cd384cb2b539c9858de555518.png) + + +## 参考文献 + +### ETL + +RAGのプロダクションレベルのアプリケーションでは、データリコールの効果を向上させるために、複数のデータソースを前処理およびクリーニングする必要があります。これをETL(抽出、変換、ロード)と呼びます。非構造化/半構造化データの前処理能力を強化するために、Difyは以下のオプションのETLソリューションをサポートしています:**Dify ETL** と[**Unstructured ETL**](https://unstructured.io/)。Unstructuredは、データを抽出してクリーンなデータに変換し、後続のステップに使用できるようにします。Difyの各バージョンでのETLソリューションの選択: + +* SaaS版では選択不可、デフォルトでUnstructured ETLを使用。 +* コミュニティ版では選択可能、デフォルトでDify ETLを使用、[環境変数](../../../getting-started/install-self-hosted/environments.md#zhi-shi-ku-pei-zhi)を介してUnstructured ETLを有効にできます。 + +ファイル解析のサポート形式の違い: + +| DIFY ETL | Unstructured ETL | +| ---------------------------------------------- | ------------------------------------------------------------------------ | +| txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv | txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv、eml、msg、pptx、ppt、xml、epub | + +異なるETLソリューションではファイル抽出の効果にも違いがあります。Unstructured ETLのデータ処理方法について詳細を知りたい場合は、[公式ドキュメント](https://docs.unstructured.io/open-source/core-functionality/partitioning)を参照してください。 + +### 埋め込み (Embedding) + +**埋め込み(Embedding)**は、単語や文章、あるいはドキュメント全体のような離散変数を、連続ベクトル表現に変換する技術を指します。この技術を用いることで、単語やフレーズ、画像などの高次元データを、より小さな次元空間にマッピングし、データを簡潔かつ効率的に表現できます。この方法は、データの次元を減らすだけでなく、重要な意味の情報も保持し、コンテンツの検索を効率化します。 + +**埋め込みモデル**は、テキストデータを数値ベクトル化することに特化した言語モデルの一種で、テキストを密度の高い数値ベクトルに変換し、その意味内容を効果的に表現することに長けています。 + +#### メタデータ + +メタデータ機能を使用してナレッジベースを管理する場合は、[メタデータ](https://docs.dify.ai/ja-jp/guides/knowledge-base/metadata)を参照してください。 diff --git a/ja-jp/guides/knowledge-base/knowledge-base-creation/upload-documents.mdx b/ja-jp/guides/knowledge-base/knowledge-base-creation/upload-documents.mdx new file mode 100644 index 00000000..cf2c9dbb
--- /dev/null +++ b/ja-jp/guides/knowledge-base/knowledge-base-creation/upload-documents.mdx @@ -0,0 +1,254 @@ +--- +title: 创建知识库 & 上传文档 +version: '简体中文' +--- + +创建知识库并上传文档大致分为以下步骤: + +1. 在 Dify 团队内创建知识库,从本地选择你需要上传的文档; +2. 选择分段与清洗模式,预览效果; +3. 配置索引方式和检索设置; +4. 等待分段嵌入; +5. 完成上传,在应用内关联并使用 🎉 + +以下是各个步骤的详细说明: + +## 1 创建知识库 + +在 Dify 主导航栏中点击知识库,在该页面你可以看到团队内的知识库,点击“**创建知识库”** 进入创建向导。 + +- 拖拽或选中文件进行上传,批量上传的文件数量取决于[订阅计划](https://dify.ai/pricing); +- 如果还没有准备好文档,可以先创建一个空知识库; +- 如果你在创建知识库时选择了使用外部数据源(Notion 或同步 Web 站点),该知识库的类型不可更改;此举是为了防止单一知识库存在多数据源而造成的管理困难。如果你需要使用多个数据源,建议创建多个知识库并使用 [多路召回](/zh-cn/user-guide/knowledge-base/indexing-and-retrieval/rerank) 模式在同一个应用内引用多个知识库。 + +**上传文档存在以下限制:** + +- 单文档的上传大小限制为 15MB; + + + + + +*** + +## 2 选择分段与清洗策略 + +将内容上传至知识库后,需要先对内容进行分段与数据清洗,该阶段可以被理解为是对内容预处理与结构化。 + + + +* **分段** + + 大语言模型存在有限的上下文窗口,无法将知识库中的所有内容发送至 LLM。因此可以将整段长文本分段处理,再基于用户问题,召回与关联度最高的段落内容,即采用分段 TopK 召回模式。此外,将用户问题与文本分段进行语义匹配时,合适的分段大小有助于找到知识库内关联性最高的文本内容,减少信息噪音。 + +* **清洗** + + 为了保证文本召回的效果,通常需要在将数据录入知识库之前便对其进行清理。例如,如果文本内容中存在无意义的字符或者空行,可能会影响问题回复的质量。关于 Dify 内置的清洗策略,详细说明请参考 [ETL](create-knowledge-and-upload-documents.md#etl)。 + + + +支持以下两种策略: + +* **自动分段与清洗** +* **自定义** + + + + + + #### 自动分段与清洗 + + 自动模式适合对分段规则与预处理规则尚不熟悉的初级用户。在该模式下,Dify 将为你自动分段与清洗内容文件。 + + + Automatic segmentation and cleaning + + + + + + #### 自定义 + + 自定义模式适合对于文本处理有明确需求的进阶用户。在自定义模式下,你可以根据不同的文档格式和场景要求,手动配置文本的分段规则和清洗策略。 + + **分段规则:** + + * **分段标识符**,指定标识符,系统将在文本中出现该标识符时分段。例如填写 `\n`([正则表达式](https://regexr.com/)中的换行符),文本换行时将自动分段; + * **分段最大长度**,根据分段的文本字符数最大上限来进行分段,超出该长度时将强制分段。一个分段的最大长度为 4000 Tokens; + * **分段重叠长度**,分段重叠指的是在对数据进行分段时,段与段之间存在一定的重叠部分。这种重叠可以帮助提高信息的保留和分析的准确性,提升召回效果。建议设置为分段长度 Tokens 数的 10-25%; + + **文本预处理规则:** 文本预处理规则可以帮助过滤知识库内部分无意义的内容。 + + * 替换连续的空格、换行符和制表符; + * 删除所有 URL 和电子邮件地址; + + + + + + + +## 3 索引方式 + +指定内容的预处理方法(分段与清洗)后,接下来需要指定对结构化内容的索引方式。索引方式将直接影响 LLM 对知识库内容的检索效率以及回答的准确性。 + +系统提供以下三种索引方式,你可以根据实际需求调整每种方式内的[检索设置](create-knowledge-and-upload-documents.md#id-4-jian-suo-she-zhi): + +* 高质量 +* 经济 + + + + 
在高质量模式下,将首先调用 Embedding 嵌入模型(支持切换)将已分段的文本转换为数字向量,帮助开发者更有效地实现大量文本信息的压缩与存储;同时还能够在用户与 LLM 对话时提供更高的准确度。 + > 如需了解更多,请参考[《Embedding 技术与 Dify》](https://mp.weixin.qq.com/s/vmY\_CUmETo2IpEBf1nEGLQ)。 + 高质量索引方式提供向量检索、全文检索和混合检索三种检索设置。关于更多检索设置的说明,请阅读 [检索设置](create-knowledge-and-upload-documents.md#id-4-jian-suo-she-zhi)。 + + + + + + + 使用离线的向量引擎与关键词索引方式,降低了准确度但无需额外花费 Token,产生费用。检索方式仅提供倒排索引,详细说明请阅读[下文](create-knowledge-and-upload-documents.md#dao-pai-suo-yin)。 + + + + + + + +*** + +## 4 检索设置 + +在**高质量索引方式**下,Dify 提供以下 3 种检索方案: + +* **向量检索** +* **全文检索** +* **混合检索** + + + + + + **定义:** 向量化用户输入的问题并生成查询向量,比较查询向量与知识库内对应的文本向量距离,寻找最近的分段内容。 + + + + + + **向量检索设置:** + + **Rerank 模型:** 使用第三方 Rerank 模型对向量检索召回后的分段再一次进行语义重排序,优化排序结果。在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”。 + + **TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。默认值为 3,数值越高,预期被召回的文本分段数量越多。 + + **Score 阈值:** 用于设置文本片段筛选的相似度阈值,只召回超过设置分数的文本片段,默认值为 0.5。数值越高说明对于文本与问题要求的相似度越高,预期被召回的文本数量也越少。 + + > TopK 和 Score 设置仅在 Rerank 步骤生效,因此需要添加并开启 Rerank 模型才能应用两者中的设置。 + + + + + + **定义:** 关键词检索,即索引文档中的所有词汇。用户输入问题后,通过明文关键词匹配知识库内对应的文本片段,返回符合关键词的文本片段;类似搜索引擎中的明文检索。 + + + + + + **Rerank 模型:** 使用第三方 Rerank 模型对全文检索召回后的分段再一次进行语义重排序,优化排序结果。在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”。 + + **TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 3 。数值越高,预期被召回的文本分段数量越多。 + + **Score 阈值:** 用于设置文本片段筛选的相似度阈值,只召回超过设置分数的文本片段,默认值为 0.5。数值越高说明对于文本与问题要求的相似度越高,预期被召回的文本数量也越少。 + + > TopK 和 Score 设置仅在 Rerank 步骤生效,因此需要添加并开启 Rerank 模型才能应用两者中的设置。 + + + + + **定义:** 同时执行全文检索和向量检索,并应用重排序步骤,从两类查询结果中选择匹配用户问题的最佳结果。在此模式下可以指定“权重设置”(无需配置 Rerank 模型 API)或选择 Rerank 模型进行检索。 + + + + + + 在混合检索设置内可以选择启用**“权重设置”**或**“Rerank 模型”**。 + + **权重设置:** 允许用户赋予语义优先和关键词优先自定义的权重。关键词检索指的是在知识库内进行全文检索(Full Text Search),语义检索指的是在知识库内进行向量检索(Vector Search)。 + + * **语义值为 1** + + 仅启用语义检索模式。借助 Embedding 模型,即便知识库中没有出现查询中的确切词汇,也能通过计算向量距离的方式提高搜索的深度,返回正确内容。此外,当需要处理多语言内容时,语义检索能够捕捉不同语言之间的意义转换,提供更加准确的跨语言搜索结果。 + + > 语义检索指的是比对用户问题与知识库内容中的向量距离。距离越近,匹配的概率越大。参考阅读:[《Dify:Embedding 技术与 Dify 
数据集设计/规划》](https://mp.weixin.qq.com/s/vmY\_CUmETo2IpEBf1nEGLQ)。 + + * **关键词值为 1** + + 仅启用关键词检索模式。通过用户输入的信息文本在知识库全文匹配,适用于用户知道确切的信息或术语的场景。该方法所消耗的计算资源较低,适合在大量文档的知识库内快速检索。 + + * **自定义关键词和语义权重** + + 除了仅启用语义检索或关键词检索模式,我们还提供了灵活的自定义权重设置。你可以通过不断调试二者的权重,找到符合业务场景的最佳权重比例。 + + *** + + **Rerank 模型:** 你可以在“模型供应商”页面配置 Rerank 模型的 API 秘钥之后,在检索设置中打开“Rerank 模型”,系统会在混合检索后对已召回的文档结果再一次进行语义重排序,优化排序结果。 + + *** + + **“权重设置”** 和 **“Rerank 模型”** 设置内支持启用以下选项: + + **TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 3 。数值越高,预期被召回的文本分段数量越多。 + + **Score 阈值:**用于设置文本片段筛选的相似度阈值,即:只召回超过设置分数的文本片段。系统默认关闭该设置,即不会对召回的文本片段相似值过滤。打开后默认值为 0.5。数值越高,预期被召回的文本数量越少。 + + + + + +*** + +在**经济索引方式**下,Dify 仅提供 1 种检索设置: + +#### **倒排索引** + +倒排索引是一种用于快速检索文档中关键词的索引结构,它的基本原理是将文档中的关键词映射到包含这些关键词的文档列表,从而提高搜索效率。具体原理请参考[《倒排索引》](https://zh.wikipedia.org/wiki/%E5%80%92%E6%8E%92%E7%B4%A2%E5%BC%95)。 + +**TopK:** 用于筛选与用户问题相似度最高的文本片段。系统同时会根据选用模型上下文窗口大小动态调整片段数量。系统默认值为 3 。数值越高,预期被召回的文本分段数量越多。 + + + + + +指定检索设置后,你可以参考[召回测试/引用归属](/zh-cn/user-guide/knowledge-base/retrieval-test-and-citation)查看关键词与内容块的匹配情况。 + +## 5 完成上传 + +配置完上文所述的各项配置后,轻点“保存并处理”即可完成知识库的创建。你可以参考 [在应用内集成知识库](integrate-knowledge-within-application.md),搭建出能够基于知识库进行问答的 LLM 应用。 + +*** + +## 参考阅读 + +#### ETL + +在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ ](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。 + +> Unstructured 能够高效地提取并转换您的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: + +文件解析支持格式的差异: + +| DIFY ETL | Unstructured ETL | +| ---------------------------------------------- | ------------------------------------------------------------------------ | +| txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv | txt、markdown、md、pdf、html、htm、xlsx、xls、docx、csv、eml、msg、pptx、ppt、xml、epub | + +不同的 ETL 方案在文件提取效果的方面也会存在差异,想了解更多关于 Unstructured ETL 的数据处理方式,请参考[官方文档](https://docs.unstructured.io/open-source/core-functionality/partitioning)。 + +**Embedding 模型** + 
+**Embedding 嵌入**是一种将离散型变量(如单词、句子或者整个文档)转化为连续的向量表示的技术。它可以将高维数据(如单词、短语或图像)映射到低维空间,提供一种紧凑且有效的表示方式。这种表示不仅减少了数据的维度,还保留了重要的语义信息,使得后续的内容检索更加高效。 + +**Embedding 模型**是一种专门用于将文本向量化的大语言模型,它擅长将文本转换为密集的数值向量,有效捕捉语义信息。 + +> 如需了解更多,请参考:[《Dify:Embedding 技术与 Dify 数据集设计/规划》](https://mp.weixin.qq.com/s/vmY\_CUmETo2IpEBf1nEGLQ)。 diff --git a/ja-jp/guides/knowledge-base/metadata.mdx b/ja-jp/guides/knowledge-base/metadata.mdx new file mode 100644 index 00000000..39823071 --- /dev/null +++ b/ja-jp/guides/knowledge-base/metadata.mdx @@ -0,0 +1,408 @@ +--- +title: 元数据 +--- + +## 什么是元数据? + +### 定义 + +元数据是用于描述其他数据的信息。简单来说,它是"关于数据的数据"。它就像一本书的目录或标签,可以为你介绍数据的内容、来源和用途。 +通过提供数据的上下文,元数据能帮助你在知识库内快速查找和管理数据。 + +### 知识库元数据定义 + +- **字段(Field)**:元数据字段是用于描述文档特定属性的标识项,每个字段代表文档的某个特征或信息。例如"author""language"等。 + +- **字段值(Value)**:字段值是该字段的具体信息或属性,例如"Jack""English"。 + +field_name_and_value + +- **字段值计数(Value Count)**:字段值计数是指在某条元数据字段中标记的字段值数量,包括重复项。例如,此处的"3"是字段值计数,指该字段中有 3 个独特的字段值。 + + + +- **值类型(Value Type)**:值类型指字段值的类型。 + - 目前,Dify 的元数据功能支持以下三种值类型: + - **字符串**(String):文本值。 + - **数字**(Number):数值。 + - **时间**(Time):日期和时间。 + +value_type + +## 如何管理知识库元数据? + +### 管理知识库元数据字段 + +在知识库管理界面,你可以创建、修改和删除元数据字段。 + +> 注意:所有在此界面进行的更新均为**全局更新**,这意味着对元数据字段列表的任何更改都会影响整个知识库,包括所有文档中标记的元数据。 + +#### 元数据管理界面简介 + +**进入元数据管理界面** + +在知识库管理界面,点击右上方的 **元数据** 按钮,进入元数据管理界面。 + +![metadata_entrance](https://assets-docs.dify.ai/2025/03/bd43305d49cc1511683b4a098c8f6e5a.png) + +![metadata_panel](https://assets-docs.dify.ai/2025/03/6000c85b5d2e29a2a5af5e0a047a7a59.png) + +**知识库元数据字段的类型** + +在知识库中,元数据字段分为两类:**内置元数据(Built-in)** 和 **自定义元数据**。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| | 内置元数据(Built-in) | 自定义元数据 |
| --- | --- | --- |
| 显示位置 | 知识库界面 元数据 栏的下半部分。 | 知识库界面 元数据 栏的上半部分。 |
| 启用方式 | 默认禁用,需要手动开启才能生效。 | 由用户根据需求自由添加。 |
| 生成方式 | 启用后,由系统自动提取相关信息并生成字段值。 | 用户手动添加,完全由用户自定义。 |
| 修改权限 | 一旦生成,无法修改字段与字段值。 | 可以删除或编辑字段名称,也可以修改字段值。 |
| 应用范围 | 启用后,适用于已上传和新上传的所有文档。 | 添加元数据字段后,字段会储存在知识库的元数据列表中,需要手动设置才能将该字段应用于具体文档。 |
| 字段 | 由系统预定义,包括:`document_name`(string,文件名)、`uploader`(string,上传者)、`upload_date`(time,上传日期)、`last_update_date`(time,最后更新时间)、`source`(string,文件来源) | 在初始状态下,知识库无自定义元数据字段,需要用户手动添加。 |
| 字段值类型 | 字符串(string):文本值;数字(number):数值;时间(time):日期和时间 | 字符串(string):文本值;数字(number):数值;时间(time):日期和时间 |
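自定义元数据字段由用户手动创建,且字段名仅支持小写字母、数字和下划线。若通过前文的元数据 API(`POST /datasets/{dataset_id}/metadata`)批量创建字段,可以先在客户端校验命名是否合法。下面是一个示意性的 Python 校验函数(函数名为演示而假设,并非 Dify 官方接口;规则取自下文的字段命名说明):

```python
import re

# 字段名规则:仅允许小写字母、数字和下划线,不允许空格和大写字母。
FIELD_NAME_PATTERN = re.compile(r"^[a-z0-9_]+$")

def is_valid_field_name(name: str) -> bool:
    """检查自定义元数据字段名是否符合命名规则(演示用,非 Dify 官方接口)。"""
    return bool(FIELD_NAME_PATTERN.fullmatch(name))

print(is_valid_field_name("upload_date"))  # True
print(is_valid_field_name("Upload Date"))  # False
```

通过校验后,再将字段名作为 `name` 参数提交给元数据创建接口,可避免因命名不合法导致的请求失败。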
+ +#### 新建元数据字段 + +1. 点击 **+添加元数据** 按钮,弹出 **新建元数据** 弹窗。 + +![new_metadata](https://assets-docs.dify.ai/2025/03/5086db42c40be64e54926b645c38c9a0.png) + +2. 在 **字段值类型** 中选择元数据字段的值类型。 + +3. 在 **名称** 框中填写字段的名称。 + +> 字段名仅支持小写字母、数字和下划线(_)字符,不支持空格和大写字母。 + +value_type + +4. 点击 **保存** 按钮,保存字段。 + +![save_field](https://assets-docs.dify.ai/2025/03/f44114cc58d4ba11ba60adb2d04c9b4c.png) + +#### 修改元数据字段 + +1. 点击单条元数据字段右侧的编辑按钮,弹出 **重命名** 弹窗。 + +![rename_field_1](https://assets-docs.dify.ai/2025/03/94327185cbe366bf99221abf2f5ef55a.png) + +2. 在 **名称** 框中修改字段名称。 + +> 此弹窗仅支持修改字段名称,不支持修改字段值类型。 + +rename_field_2 + +3. 点击 **保存** 按钮,保存修改后的字段。 + +> 修改并保存后,该字段将在知识库中的所有相关文档中同步更新。 + +![same_renamed_field](https://assets-docs.dify.ai/2025/03/022e42c170b40c35622b9b156c8cc159.png) + +#### 删除元数据字段 + +点击单条元数据字段右侧的删除按钮,可以删除该字段。 + +> 如果删除单条字段,该字段及该字段下包含的字段值将从知识库的所有文档中删除。 + +![delete_field](https://assets-docs.dify.ai/2025/03/022e42c170b40c35622b9b156c8cc159.png) + +### 编辑文档元数据信息 + +#### 批量编辑文档元数据信息 + +你可以在知识库管理界面批量编辑文档的元数据信息。 + +**打开编辑元数据弹窗** + +1. 打开知识库管理界面,在文档列表左侧的白色方框中勾选你希望批量操作的文档。勾选后,页面下方会弹出操作选项。 + +![edit_metadata_entrance](https://assets-docs.dify.ai/2025/03/18b0c435604db6173acba41662474446.png) + +2. 点击操作选项中的 **元数据**,弹出 **编辑元数据** 弹窗。 + +![edit_metadata](https://assets-docs.dify.ai/2025/03/719f3c31498f23747fed7d7349fd64ba.png) + +**批量新增元数据信息** + +1. 在 **编辑元数据** 弹窗中点击底部的 **+添加元数据** 按钮,弹出操作弹窗。 + +add_metadata + +- 如需为选中文档添加已创建的字段: + + - 可以从下拉列表中选择已有的字段,添加到字段列表中。 + + - 可以在 **搜索元数据** 搜索框中搜索你需要的字段,添加到该文档的字段列表中。 + +existing_field + +- 如需为选中文档新建字段,可以点击弹窗左下角的 **+新建元数据** 按钮,并参考前文的 **新建元数据字段** 章节新建字段。 + + > 在 **+新建元数据** 弹窗中新建的元数据字段,将自动同步至知识库字段列表中。 + +new_metadata_field + +- 如需管理已创建的字段,可以点击该弹窗右下角的 **管理** 按钮,跳转到知识库的管理界面。 + +manage_field + +2. *(可选)* 新增字段后,在字段值框内填写该字段相应的字段值。 + +value_for_field + +- 如果值类型为 **时间**,在填写字段值时会弹出时间选择器,供你选择具体时间。 + +date_picker + +3. 点击 **保存** 按钮,保存操作。 + +**批量删改元数据信息** + +1. 
在 **编辑元数据** 弹窗中删改元数据信息: + +- **添加字段值**: 在需要添加元数据值的字段框内直接输入所需值。 + +- **重置字段值**: 将光标悬停在字段名左侧的蓝色圆点上,蓝点将变为 **重置** 按钮。点击蓝点,将字段框内修改后的内容重置为原始元数据值。 + +reset_values + +- **删除字段值**: + + - 删除一个字段值:在需要删除字段值的字段框内直接删除该字段值。 + + - 删除多个字段值:点击 **多个值** 卡片的删除图标,清空所有选中文档的该元数据字段的值。 + +multiple_values + +- **删除单条元数据字段**: 点击字段最右侧的删除符号,删除该字段。删除后,该字段会被横线划掉且置灰。 + + > 此操作仅会删除已选文档的该字段与字段值,字段本身依然保留在知识库中。 + +delete_fields + +2. 点击 **保存** 按钮,保存操作。 + +**调整批量操作的应用范围** + +- **调整批量操作的应用范围**: 你可以使用 **编辑元数据** 弹窗左下角的 **应用于所有文档** 选框来调整编辑模式中改动的应用范围。 + + - **否(默认)**: 如果不选中该选项,编辑模式中的改动仅对原本已有该元数据字段的文档生效,其他文档不会受到影响。 + + - **是**: 如果选中该选项,编辑模式中的改动会对所有选中的文档生效。原本没有该字段的文档,会自动添加该字段。 + +apply_all_changes + +#### 编辑单篇文档元数据信息 + +你可以在文档详情界面中编辑单篇文档的元数据信息。 + +**进入文档元数据编辑模式** + +1. 在文档详情界面,点击信息栏上方的 **开始标记** 按钮。 + +![details_page](https://assets-docs.dify.ai/2025/03/066cb8eaa89f6ec17aacd8b09f06771c.png) + +2. 进入文档元数据编辑模式。 + +![start_labeling](https://assets-docs.dify.ai/2025/03/4806c56e324589e1711c407f6a1443de.png) + +**新增文档元数据信息** + +1. 在文档的元数据编辑模式中,点击 **+添加元数据** 按钮,弹出操作弹窗。 +![add_metadata](https://assets-docs.dify.ai/2025/03/f9ba9b10bbcf6eaca787eed4fcde44da.png) + +- 如需使用新建字段为该文档标记字段值,可以点击弹窗左下角的 **+ 新建元数据** 按钮,并参考前文的 **新建元数据字段** 章节新建字段。 + + > 在文档页面新建的元数据字段,将自动同步至知识库字段列表中。 + + ![new_fields](https://assets-docs.dify.ai/2025/03/739e7e51436259fca45d16065509fabb.png) + +- 如需使用知识库已有的字段为该文档标记字段值,可以选择下列任意一种方式使用已有的字段: + + - 从下拉列表中选择知识库已有的字段,添加到该文档的字段列表中。 + + - 在 **搜索元数据** 搜索框中搜索你需要的字段,添加到该文档的字段列表中。 + + ![existing_fields](https://assets-docs.dify.ai/2025/03/5b1876e8bc2c880b3b774c97eba371ab.png) + +- 如需管理知识库已有的字段,可以点击弹窗右下角的 **管理** 按钮,跳转到知识库的管理界面。 + + ![manage_metadata](https://assets-docs.dify.ai/2025/03/8dc74a1d2cdd87294e58dbc3d6dd161b.png) + +2. *(可选)* 添加字段后,在字段名右侧的元数据栏中填写字段值。 + +![values_for_fields](https://assets-docs.dify.ai/2025/03/488107cbea73fd4583e043234fe2fd2e.png) + +3. 点击右上角的 **保存** 按钮,保存字段值。 + +**删改文档元数据信息** + +1. 
在文档的元数据编辑模式中,点击右上角的 **编辑** 按钮,进入编辑模式。 + +![edit_mode](https://assets-docs.dify.ai/2025/03/bb33a0f9c6980300c0f979f8dc0d274d.png) + +2. 删改文档元数据信息: + - **删改字段值**: 在字段名右侧的字段值框内,删除或修改字段值。 + + > 此模式仅支持修改字段值,不支持修改字段名。 + + - **删除字段**: 点击字段值框右侧的删除按钮,删除字段。 + + > 此操作仅会删除该文档的该字段与字段值,字段本身依然保留在知识库中。 + +![edit_metadata](https://assets-docs.dify.ai/2025/03/4c0c4d83d3ad240568f316abfccc9c2c.png) + +3. 点击右上角的 **保存** 按钮,保存修改后的字段信息。 + +## 如何使用元数据功能在知识库中筛选文档? + +请参阅 [在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application) 中的 **使用元数据筛选知识** 章节。 + +## API 信息 + +请参阅 [通过 API 维护知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api)。 + +## FAQ + +- **元数据有什么作用?** + + - 提升搜索效率:用户可以根据元数据标签快速筛选和查找相关信息,节省时间并提高工作效率。 + + - 增强数据安全性:通过元数据设置访问权限,确保只有授权用户能访问敏感信息,保障数据的安全性。 + + - 优化数据管理能力:元数据帮助企业或组织有效分类和存储数据,提高数据的管理和检索能力,增强数据的可用性和一致性。 + + - 支持自动化流程:元数据在文档管理、数据分析等场景中可以自动触发任务或操作,简化流程并提高整体效率。 + +- **知识库元数据管理列表中的元数据字段和某篇文档中的元数据值有什么区别?** + +| | 定义 | 性质 | 举例 | +| --- | --- | --- | --- | +| 元数据管理列表中的元数据字段 | 预定义的字段,用于描述文档的某些属性。 | 全局性字段。所有文档都可以使用这些字段。 | 作者、文档类型、上传日期。 | +| 某篇文档中的元数据值 | 每个文档按需标记的针对特定文档的信息。 | 文档特定的值。每个文档根据其内容会标记不同的元数据值。 | 文档 A 的"作者"字段值为"张三",文档 B 的"作者"字段值为"李四"。 | + +- **"在知识库管理界面删除某条元数据字段""在编辑元数据弹窗中删除已选文档的某条元数据字段"和"在文档详情界面删除某条元数据字段"有什么区别?** + +| 操作方式 | 操作方法 | 影响范围 | 结果 | +| --- | --- | --- | --- | +| 在知识库管理界面删除某条元数据字段 | 在知识库管理界面,点击某条元数据字段右侧的删除图标,删除该字段。 | 从知识库管理列表中完全删除该元数据字段及其所有字段值。 | 该字段从知识库中移除,所有文档中的该字段及包含的所有字段值也会消失。 | +| 在编辑元数据弹窗中删除已选文档的某条元数据字段 | 在编辑元数据弹窗中,点击某条元数据字段右侧的删除图标,删除该字段。 | 仅删除已选文档的该字段与字段值,字段本身依然保留在知识库管理列表中。 | 选中文档中的字段与字段值被移除,但字段仍保留在知识库内,字段值计数会发生数值上的变化。 | +| 在文档详情界面删除某条元数据字段 | 在文档详情界面中的元数据编辑模式里,点击某条元数据字段右侧的删除图标,删除该字段。 | 仅删除该文档的该字段与字段值,字段本身依然保留在知识库管理列表中。 | 该文档中的字段与字段值被移除,但字段仍保留在知识库内,字段值计数会发生数值上的变化。 | diff --git a/ja-jp/guides/knowledge-base/readme.mdx b/ja-jp/guides/knowledge-base/readme.mdx new file mode 100644 index 00000000..472d32ad --- /dev/null +++ 
b/ja-jp/guides/knowledge-base/readme.mdx @@ -0,0 +1,39 @@ +--- +title: ナレッジベース +--- + +Difyプラットフォームでは、RAG(検索拡張生成)ソリューションを通じて、ナレッジベースをよりアクセスしやすい形で提供します。開発者は企業の内部文書、FAQ、規格情報などをナレッジベースにアップロードし、整理することが可能で、これらはその後、大規模言語モデル(LLM)が問い合わせる際の情報源として利用されます。これにより、AIの大規模モデルが当初学習した静的なデータに依存する代わりに、ナレッジベースの内容をリアルタイムで更新し、情報が古くなることや欠けることによる問題を防ぐことができます。 + +ユーザーからの質問を受けたLLMは、まずナレッジベース内の内容をキーワードに基づいて検索します。これにより、関連性の高いコンテンツが選択され、LLMがより正確な答えを出すための重要な文脈を提供します。 + +この手法により、開発者はLLMが既存の訓練データに頼るだけでなく、リアルタイムの文書やデータベースからの最新情報を扱うことが可能となり、答えの正確性と関連性が向上します。 + +**Difyの主な利点**: + +* リアルタイム更新:ナレッジベースの内容はいつでも最新のものに更新することができ、モデルが最新情報を得られるようにします。 + +* 高精度:関連する文書を検索することで、LLMは実際の内容に基づき高品質な回答を生み出すことができ、誤情報を減らします。 + +* 柔軟性:開発者はナレッジベースの内容をカスタマイズでき、実際のニーズに合わせて知識の範囲を調整できます。 + +ナレッジベース機能はRAGパイプラインの各段階を可視化し、ユーザーが個人またはチームのナレッジベースを管理しやすくするシンプルで使いやすいユーザーインターフェースを提供します。また、これを迅速にAIアプリケーションに統合することができます。準備するのは以下のようなテキストコンテンツだけです: + +* 長文コンテンツ(TXT、Markdown、DOCX、HTML、JSON、さらにはPDF) +* 構造化データ(CSV、Excelなど) +* オンラインデータソース(ウェブサイトからの情報収集、Notionからのデータ取得など) + +ファイルを「ナレッジベース」にアップロードすることで、データの自動処理が行われます。 + +> もし既に独自のナレッジベースを持っている場合は、それをDifyに接続することで、外部のナレッジベースとの連携を確立できます。 + +![ナレッジベースを作る](https://assets-docs.dify.ai/2024/12/081d73351099a73061fc201194fd2c0a.png) + +### ユースケース + +例えば、既存のナレッジベースや製品のドキュメントを利用してAIカスタマーサポートアシスタントを開発したい場合、Difyを用いると、ドキュメントをナレッジベースにアップロードし、対話型アプリケーションを簡単に作成できます。従来の手法では、テキストデータからAIカスタマーサポートアシスタントを開発するまで数週間を要し、継続的なメンテナンスや効果的な更新作業が難しいことがありました。しかし、Difyを使用すると、このプロセスをわずか3分で完了させ、ユーザーからのフィードバック収集を始めることができます。 + +### ナレッジベースとドキュメント + +Difyでのナレッジベースは、複数のドキュメント(Documents)から構成され、一つのドキュメントは複数のコンテンツブロック(Chunk)を含むことがあります。このナレッジベースは、アプリケーション全体で検索の対象として統合することが可能です。ドキュメントは、開発者や運営スタッフによってアップロードされるか、他のデータソースから同期されます。 + +独自のドキュメントライブラリを構築している場合、Difyの[外部ナレッジベース機能](./connect-external-knowledge-base.md)を利用して、自身のナレッジベースをDifyプラットフォームにリンクさせることができます。これにより、Difyプラットフォーム内で内容を再度アップロードすることなく、大規模な言語モデルがリアルタイムで独自のナレッジベースの内容を参照することが可能になります。 \ No newline at end of file diff --git a/ja-jp/guides/knowledge-base/retrieval-test-and-citation.mdx 
b/ja-jp/guides/knowledge-base/retrieval-test-and-citation.mdx new file mode 100644 index 00000000..1e48403a --- /dev/null +++ b/ja-jp/guides/knowledge-base/retrieval-test-and-citation.mdx @@ -0,0 +1,74 @@ +--- +title: 召回测试/引用归属 +version: '简体中文' +--- + +### 1 召回测试 + +Dify 知识库内提供了文本召回测试的功能,用于模拟用户输入关键词后调用知识库内容区块。召回的区块将按照分数高低进行排序并发送至 LLM。一般而言,问题与内容块的匹配度越高,LLM 所输出的答案也就更加贴近源文档,文本“训练效果”越好。 + +你可以使用不同的检索方式及参数配置,查看召回的内容区块质量与效果。不同的知识库分段模式对应不同的召回测试方法。 + + + + 在 **源文本** 输入框输入常见的用户问题,点击 **测试** 按钮即可在右侧的 **召回段落** 内查看召回结果。 + + 在通用模式下,内容区块相互独立;内容块右上角的分数为内容与关键词的匹配分数。得分越高,说明问题关键词与内容块的的匹配度越高。 + + ![通用模式 - 召回内容块](https://assets-docs.dify.ai/2024/12/806967bb36e74fc744b34887cd3ebe52.png) + + 轻点内容块即可查看所引用的内容详情。每个内容块底部将展示所引用的文档信息源,你可以借此判断该内容分段是否合理。 + + ![查看召回内容详情](https://assets-docs.dify.ai/2024/12/419ac78ad21ea198b08f89c4f5fde485.png) + + + 在 **源文本** 输入框输入常见的用户问题,点击 **测试** 并在右侧的 **召回段落** 查看召回结果。在父子分段模式下,问题的关键词将命中子分段中的内容块,以取得更加精准的匹配效果。区块右上角的得分指的是子区块与关键词之间的匹配得分。 + + 你可以在预览区内查看具体的命中段落内容;匹配后将召回子分段所在父分段的完整上下文,向 AI 应用提供更加完整的信息。 + + ![召回测试 - 父子分段模式](https://assets-docs.dify.ai/2024/12/6f0b99f97b138805bf4665d0c5c16f26.png) + + 每个内容块底部将展示所引用的文档源,通常是文档中的某个段落或句子。轻点引用源右侧的“打开”按钮即可查看被引用的内容。 + + 分段详情页的左侧为父分段信息,右侧为被命中的子分段。关键词可能命中多个子分段,同时在开头显示与关键词的匹配分数。你可以基于详情信息判断当前的内容分段是否合理。 + + ![查看召回内容详情](https://assets-docs.dify.ai/2024/12/22103227f8a25069d147160254f69512.png) + + + +在 **记录** 内可以查看到历史的查询记录;若知识库已关联至应用内,由应用内发起的知识库查询记录也可以在此查看。 + +### 修改文本检索方式 + +点击源文本输入框右上角的图标即可更换当前知识库的检索方式与具体参数,保存之后仅在当前召回测试的调试过程中生效,你可以借此比较不同检索设置的效果。如果你想要修改当前知识库的检索方式,前往“知识库设置” > “检索设置”中进行设置。 + +![](https://assets-docs.dify.ai/2024/12/86b78cb114a843c9dedcba1fe12e3b02.png) + +**召回测试建议步骤:** + +1. 设计和整理能够覆盖用户常见问题的测试用例/测试问题集/指引内容; +2. 根据内容特点和使用场景(是否为问答内容、是否涉及多语言问答等),选择合适的检索策略。 +3. 
调整召回分段数量(TopK)和召回分数阈值(Score),根据实际的应用场景、包括文档本身的质量来选择合适的参数组合。 + +**TopK 值和召回阈值(Score)如何配置** + +* **TopK 代表按相似分数倒排时召回分段的最大个数**。TopK 值调小,将会召回更少分段,可能导致召回的相关文本不全;TopK 值调大,将召回更多分段,可能导致召回语义相关性较低的分段使得 LLM 回复质量降低。 +* **召回阈值(Score)代表允许召回分段的最低相似分数。** 召回分数阈值调小,将会召回更多分段,可能导致召回相关度较低的分段;召回分数阈值调大,将会召回更少分段,过大时将会导致丢失相关分段。 + +*** + +### 2 引用与归属 + +在应用内的“上下文”添加知识库后,可以在 **“添加功能”** 内开启 **“引用与归属”**。在应用内输入问题后,若涉及已关联的知识库文档,将标注内容的引用来源。你可以通过此方式检查知识库所召回的内容分段是否符合预期。 + +![](https://assets-docs.dify.ai/2025/03/a2e4db6635634e99a9aeb91341b53d1c.png) + +开启功能后,当 LLM 引用知识库内容来回答问题时,可以在回复内容下面查看到具体的引用段落信息,包括**原始分段文本、分段序号、匹配度**等。点击引用分段上方的 **跳转至知识库**,可以快捷访问该分段所在的知识库分段列表,方便开发者进行调试编辑。 + +![](https://assets-docs.dify.ai/2025/03/1d82a0e1032f97e5832ed2ca7cb99fb2.png) + +### 查看知识库内已关联的应用 + +知识库将会在左侧信息栏中显示已关联的应用数量。将鼠标悬停至圆形信息图标时将显示所有已关联的 Apps 列表,点击右侧的跳转按钮即可快速查看对应的应用。 + +![查看知识库内已关联的应用](https://assets-docs.dify.ai/2024/12/28899b9b0eba8996f364fb74e5b94c7f.png) diff --git a/ja-jp/guides/management/README.mdx b/ja-jp/guides/management/README.mdx new file mode 100644 index 00000000..80b3efc4 --- /dev/null +++ b/ja-jp/guides/management/README.mdx @@ -0,0 +1,5 @@ +--- +title: 管理 +--- + + diff --git a/ja-jp/guides/management/app-management.mdx b/ja-jp/guides/management/app-management.mdx new file mode 100644 index 00000000..dc146a95 --- /dev/null +++ b/ja-jp/guides/management/app-management.mdx @@ -0,0 +1,55 @@ +--- +title: アプリの管理 +--- + + +### アプリ情報の編集 + +アプリを作成した後に、アプリ名や説明を変更したい場合は、アプリの左上隅にある「情報の編集」をクリックしてください。これにより、アプリのアイコン、名前、または説明を修正できます。 + +![アプリ情報の編集](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/63a449e9a8ae337b9c067165d1674a45.png) + +### アプリの複製 + +すべてのアプリは複製が可能です。アプリの左上隅にある「複製」をクリックしてください。 + +### ワークフローのオーケストレーションに切り替える + +TODO 🚧 + +### アプリのエクスポート + +Difyで作成されたアプリはDSL形式でのエクスポートに対応しており、設定ファイルを任意のDifyチームに自由にインポートできます。DSLファイルは次の2つの方法でエクスポートできます: + +* シナリオページのアプリカードの右下隅にある「DSLをエクスポート」をクリックする。 +* アプリ内のオーケストレーションページに入った後、左上隅の「DSLをエクスポート」ボタンをクリックする。 + 
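上記の手順でエクスポートされるDSLファイルは、アプリの基本情報やオーケストレーション構成をまとめたYAMLです。構造のイメージとして、おおよそ次のような形になります(キー名や値はあくまで説明用の仮の例であり、実際の構成はDifyのバージョンやアプリの種類によって異なります):

```yaml
app:
  name: サンプルアプリ      # アプリ名(仮の値)
  mode: chat                # アプリの種類(chat / workflow など)
  icon: 🤖
  description: ''
kind: app
version: 0.1.2              # DSL のバージョン(エクスポート時の環境に依存)
model_config: {}            # モデルやプロンプトなどの設定(実際には内容が入る)
```

インポート時にはこのバージョン情報がチェックされるため、できるだけ最新の環境からエクスポートしたDSLファイルを使用してください。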
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/544c18d770e230db93d6756bba98d8a7.png) + +DSLファイルは以下の機密情報を含まれません: + +* APIキーなどの第三者ツールの認証情報 +* 環境変数に`Secret`が含まれる場合、DSLをエクスポートするときに機密情報のエクスポートを許可するかどうかを尋ねるメッセージが表示されます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/25ce002ef7f0392fc6b3b6975ae137ec.png) + + +Dify DSLは、Dify.AIによってv0.6以降で定義されたAIアプリエンジニアリングファイル標準です。ファイル形式はYMLで、アプリの基本的な説明、モデルパラメータ、オーケストレーション構成などをカバーしています。 + + +### アプリケーションのインポート + +Difyアプリケーションをインポートする際は、まずDSLファイルをDifyプラットフォームにアップロードしてください。インポート中にバージョンチェックが行われ、DSLファイルのバージョンが古い場合には警告が表示されます。 + +- SaaSユーザーの場合、SaaSプラットフォームからエクスポートされるDSLファイルは常に最新のバージョンです。 +- コミュニティユーザーの場合は、[Difyのアップグレード](https://docs.dify.ai/ja-jp/getting-started/install-self-hosted/docker-compose#upgrade-dify)を参照して、コミュニティエディションを最新に更新し、最新のDSLファイルをエクスポートすることをお勧めします。これにより、互換性の問題を防ぐことができます。 + +![](https://assets-docs.dify.ai/2024/11/487d2c1cc8b86666feb35ea8a346c053.png) + +### アプリの削除 + +アプリを削除したい場合は、アプリの左上隅にある「削除」をクリックしてください。 + + +⚠️アプリの削除は取り消すことができません。すべてのユーザーがあなたのアプリにアクセスできなくなり、アプリ内のすべてのプロンプト、オーケストレーション構成、ログが削除されます。 + diff --git a/ja-jp/guides/management/personal-account-management.mdx b/ja-jp/guides/management/personal-account-management.mdx new file mode 100644 index 00000000..6f8207a6 --- /dev/null +++ b/ja-jp/guides/management/personal-account-management.mdx @@ -0,0 +1,105 @@ +--- +title: 個人アカウントの管理 +--- + + +## 各バージョンでのログイン方法 + +Difyのバージョンによってサポートされるログイン方法は以下のようになっています: + + + + + + + + + + + + + + + + + + +
バージョンログイン方法
コミュニティ版メールアドレスとパスワード
クラウド版GitHubアカウント認証、Googleアカウント認証、メールアドレスと認証コード
+> 注意点:Difyのクラウドサービスでは、GitHubやGoogleアカウントに紐づくメールアドレスが、認証コードでログインする際のメールアドレスと一致している場合、システムが自動的にそれらを同一のアカウントとみなして紐づけます。これにより、手動でのアカウント連携を省略し、重複したアカウント作成を防ぎます。 + +![ログイン方法の図解](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/c4a9bb46f636807f0b59710724fddc40.png) + +## 個人情報の変更 + +個人アカウント情報を更新するには、以下の手順に従ってください: + +1. Difyチームのホームページにアクセスします。 +2. 右上隅のアバターをクリックします。 +3. **「アカウント」**を選択します。 + +次の詳細を変更できます: + +* アバター +* ユーザー名 +* メールアドレス +* パスワード + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/40ea6368cd22021a3f1627738e40f597.png) + +### ログイン方法 + +Difyでは、3つのログイン方法を提供しています。それは、メールアドレスと認証コードによるログイン、Googleアカウントでの認証、そしてGitHubアカウントでの認証です。1つのDifyアカウントで、直接メールアドレスと認証コードを使ってログインすることも、同じメールアドレスを使用しているGoogleやGitHubのアカウントを通じてログインすることもできます。この場合、追加でアカウントを紐付ける必要はありません。 + +### 表示言語の変更 + +表示言語を変更するには、Difyチームのホームページで右上隅のアバターをクリックし、**「言語」**を選択します。Difyは以下の言語をサポートしています: + +* 英語 +* 中国語(簡体字) +* 中国語(繁体字) +* ポルトガル語(ブラジル) +* フランス語(フランス) +* 日本語(日本) +* 韓国語(韓国) +* ロシア語(ロシア) +* イタリア語(イタリア) +* タイ語(タイ) +* インドネシア語 +* ウクライナ語(ウクライナ) + +Difyはコミュニティのボランティアによる追加の言語バージョンの提供を歓迎しています。貢献をご希望の方は、[GitHubリポジトリ](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)をご覧ください。 + +### アカウントにリンクされたアプリを確認 + +**アカウント**ページで、現在のアカウントにリンクされているアプリを確認できます。 + +### 個人アカウントの削除 + +⚠️ 危険な操作です。慎重に進めてください。 + +DifyのSaaS版のアカウントの削除を実行するには、右上隅にあるあなたのアバターをクリックし、ドロップダウンメニューから **「アカウント」** を選択した後、**「アカウントを削除」** ボタンをクリックしてください。 + +アカウントを削除すると、この操作は取り消しできません。同じメールアドレスは30日間再登録できません。アカウントが所有するすべてのワークスペースも削除され、共有ワークスペースからも自動的に削除されます。 + +削除したいアカウントのメールアドレスと、確認用の認証コードを入力する後、システムはアカウントに関連するすべての情報を完全に削除します。 + +![個人アカウントを削除する](https://assets-docs.dify.ai/2024/12/ded326f27886b5884969c220ead998d7.png) + +### FAQ + +**1. アカウントを誤って削除した場合、削除を取り消すことはできますか?** +アカウント削除は取り消しができません。特別な事情がある場合は、削除後20日以内に `support@dify.ai` に連絡し、詳細を説明してください。 + +**2. 
アカウントを削除した後、チーム内での役割やデータはどうなりますか?** +アカウント削除後: +- **チームオーナー**として作成したワークスペースは解散され、そのワークスペース内のすべてのデータが削除されます。チームメンバーはそのワークスペースへのアクセスを失います。 +- **チームメンバーまたは管理者**として参加していたワークスペースは引き続きデータを保持し、アカウントで作成したアプリデータも保存されます。アカウントはそのワークスペースのメンバーリストから削除されます。 + +**3. アカウントを削除した後、同じメールアドレスで新しいアカウントを再登録できますか?** +アカウント削除後、30日間は同じメールアドレスで新しいアカウントを登録することはできません。 + +**4. アカウントを削除した後、GoogleやGitHubなどのサードパーティサービスの認証は取り消されますか?** +はい、アカウント削除後、GoogleやGitHubなどのすべてのサードパーティサービスの認証が自動的に取り消されます。 + +**5. アカウントを削除した後、Difyのサブスクリプションは自動的にキャンセルされ、払い戻しされますか?** +アカウント削除後、Difyのサブスクリプションは自動的にキャンセルされますが、サブスクリプション料金は払い戻されません。今後の課金は行われません。 diff --git a/ja-jp/guides/management/subscription-management.mdx b/ja-jp/guides/management/subscription-management.mdx new file mode 100644 index 00000000..ec9211fb --- /dev/null +++ b/ja-jp/guides/management/subscription-management.mdx @@ -0,0 +1,101 @@ +--- +title: サブスクリプション管理 +--- + + +### Difyチームサブスクリプションのアップグレード + +チームの所有者や管理者は、Difyチームのサブスクリプションプランをアップグレードできます。Difyチームのホームページ右上にある **「アップグレード」** ボタンをクリックし、希望のプランを選択して支払いを完了することで、サブスクリプションをアップグレードしてください。 + +### Difyチームサブスクリプションの管理 + +Difyの有料サービス(ProfessionalまたはTeamプラン)に加入後、チームの所有者や管理者は **「設定」** → **「請求」** にアクセスし、チームの請求とサブスクリプションの詳細を管理できます。 + +請求ページでは、さまざまなチームリソースの利用状況を確認できます。 + +![チームの請求管理](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/07eb0d03a33e2e01df44dbf3cf241f14.png) + +### よくある質問 + +#### 1. チームプランのアップグレード/ダウングレードやサブスクリプションのキャンセル方法は? + +チームの所有者や管理者は、**設定** → **請求** に移動し、**請求とサブスクリプションの管理** をクリックすることで、サブスクリプションプランを変更できます。 + +* ProfessionalプランからTeamプランにアップグレードする際は、当月の差額を支払う必要があり、すぐに適用されます。 +* TeamプランからProfessionalプランにダウングレードする場合も、即座に適用されます。 + +![有料プランの変更](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/c572ba8806b41eb6564fc658d3d8124b.jpeg) + +サブスクリプションプランをキャンセルすると、**チームは現在の請求サイクルの終了時に自動的にサンドボックス/無料プランに移行**し、その後はサンドボックス/無料プランの制限に従った利用となります。 + +#### 2. サブスクリプションプランをアップグレードした後、チームの利用可能なリソースにどのような変更があるか? 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
リソース無料ProfessionalTeam
チームメンバー数の制限13無制限
アプリ数の制限1050無制限
ベクトル空間の容量5MB200MB1GB
アプリのマーク付き返信1020005000
ナレッジベースへのドキュメントアップロード数505001000
OpenAI会話クォータ200合計月間5000月間10000
+注: + +* 無料プランからProfessionalにアップグレードする場合、すべてのリソースが増加します。 +* ProfessionalからTeamにアップグレードすると、一部のリソースが無制限になるなど、さらなる拡張が行われます。 + +サブスクリプションプランをアップグレードした後は: + +* OpenAI会話クォータは、新しい請求サイクルに合わせてリセットされます。 +* 以前に使用したリソース(例:ベクトル空間の使用量やドキュメントのアップロード)はリセットされず、削除されません。 + +#### 3. サブスクリプションが期限切れになった場合はどうなりますか? + +サブスクリプションが期限切れになると、チームは自動的にサンドボックス/無料プランにダウングレードされ、所有者以外はチームにアクセスできなくなります。また、チーム内の余剰リソース(ドキュメント、ベクトル空間など)もロックされます。 + +#### 4. チーム所有者のアカウントを削除するとチームに影響しますか? + +チームは必ず1人の所有者に紐づいています。チームの所有権が他のメンバーに適切に移転されない場合、チームに関するすべてのデータが削除され、所有者のアカウントと共に消去されます。 + +#### 5. サブスクリプションプラン間の違いは何ですか? + +詳細な機能の比較については、[Difyの価格設定](https://dify.ai/pricing)をご覧ください。 diff --git a/ja-jp/guides/management/team-members-management.mdx b/ja-jp/guides/management/team-members-management.mdx new file mode 100644 index 00000000..c30c61a5 --- /dev/null +++ b/ja-jp/guides/management/team-members-management.mdx @@ -0,0 +1,84 @@ +--- +title: チームメンバーの管理 +--- + + +このガイドでは、Difyチーム内のメンバーを管理する方法について説明します。異なるDifyバージョンにおけるチームメンバーの制限は以下の通りです。 + + + + + + + + + + + + + + + + + + + + +
サンドボックス / 無料プロフェッショナルチームコミュニティエンタープライズ
13無制限無制限無制限
+### メンバーの追加 + + +チームの所有者と管理人のみがチームメンバーを追加する権限を持っています。 + + +メンバーを追加するには、チームの所有者や管理人は右上隅のアバターをクリックし、**"メンバー"** → **"追加"**を選択します。メールアドレスを入力し、メンバー権限を割り当ててプロセスを完了します。 + +![チームメンバーへの権限の割り当て](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/bbd0873959dd3fe342b7212b98e812ae.png) + +> コミュニティエディションでメール機能を有効にするには、チームオーナーがシステムの[環境変数](../../getting-started/install-self-hosted/environments)を設定してメールサービスをオンにする必要があります。 + +- Difyに未登録の招待メンバーには、招待メールが送信されます。メール内のリンクをクリックすることで、登録を完了できます。 +- すでにDifyに登録済みの招待メンバーには、権限が自動で付与され、**招待メールは送信されません**。招待されたメンバーは、右上のメニューから新しいワークスペースに切り替えることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/93a6f055cfaf65dfe138e8ac332f71d1.png) + +### メンバーの権限 + +チームメンバーは、所有者、管理者、編集者、メンバーに分類されます。 + +* **所有者** + * ロールの説明: チームの最初のメンバーで、最も高いレベルの権限を持ち、チーム全体の運営と管理を担当します。 + * 権限の概要: チームメンバーの管理、メンバー権限の調整、モデルプロバイダーの設定、アプリケーションの作成と削除、ナレッジベースの作成、ツールライブラリの設定などの権限を持ちます。 +* **管理者** + * ロールの説明: チームの管理者で、チームメンバーとモデルプロバイダーの管理を担当します。 + * 権限の概要: メンバー権限を調整することはできませんが、チームメンバーの追加や削除、モデルプロバイダーの設定、アプリケーションの作成、編集、削除、ナレッジベースの作成、ツールライブラリの設定などの権限を持ちます。 +* **編集者** + * ロールの説明: 通常のチームメンバーで、共同でアプリケーションの作成と編集を担当します。 + * 権限の概要: チームメンバーの管理、モデルプロバイダーの設定、ツールライブラリの設定はできません。アプリケーションの作成、編集、削除、ナレッジベースの作成などの権限を持ちます。 +* **メンバー** + * ロールの説明: 通常のチームメンバーで、チーム内で作成されたアプリケーションの閲覧と使用のみが許可されます。 + * 権限の概要: チーム内でのアプリケーションの使用とツールの使用のみが許可されます。 + +### メンバーの削除 + + +チームの所有者のみがチームメンバーを削除する権限を持っています。 + + +メンバーを削除するには、Difyチームのホームページの右上隅のアバターをクリックし、**"設定"** → **"メンバー"**に移動し、削除するメンバーを選択して**"チームから削除"**をクリックします。 + +![メンバーの削除](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/management/0596a58b4fc59c9a0fae24bdff90b769.png) + +### よくある質問 + +#### 1. チームの所有者権を譲渡するにはどうすればよいですか? + +チームの所有者は最も高いレベルの権限を持ちます。チーム構造の安定性を維持するため、一度確立されたチームの所有者権は手動で譲渡することはできません。 + +#### 2. チームを削除するにはどうすればよいですか? + +チームデータのセキュリティ上の理由から、チームの所有者は自分のチームを自分で削除することはできません。 + +#### 3. チームメンバーのアカウントを削除するにはどうすればよいですか? 
+ +チームの所有者も管理者もチームメンバーのアカウントを削除することはできません。アカウントの削除は、アカウントの所有者本人が自らリクエストする必要があり、他者が代わりに行うことはできません。アカウントの削除の代替手段として、チームからメンバーを削除することで、そのユーザーのチームへのアクセス権限が取り消されます。 diff --git a/ja-jp/management/version-control.mdx b/ja-jp/guides/management/version-control.mdx similarity index 61% rename from ja-jp/management/version-control.mdx rename to ja-jp/guides/management/version-control.mdx index 91df9860..3a748ab7 100644 --- a/ja-jp/management/version-control.mdx +++ b/ja-jp/guides/management/version-control.mdx @@ -2,6 +2,7 @@ title: バージョン管理 --- + + ## はじめに バージョン管理とは、**Difyのチャットフローやワークフロー管理インターフェース**の核となる機能です。この機能により、ユーザーはアプリの複数バージョンを効率的に管理および公開することができます。 @@ -12,21 +13,45 @@ title: バージョン管理 - **下書きバージョン(Current Draft)**: Difyのチャットフローやワークフロー管理インターフェースにおいて、**現在の作業状態を示す唯一のバージョン**です。ユーザーはこのバージョンでチャットフローやワークフローの編集、修正、プレビューを行うことができます。 -current_draft

+ +

- **公開バージョン(Published Version)**: ユーザーがオンラインに公開したすべてのバージョンの総称です。最新の公開バージョンと過去の公開バージョンの総称です。公開操作を実行するたびに、新しい公開バージョンが生成されます。 - **最新公開バージョン(Latest Version)**: ユーザーが最後にオンラインに公開したバージョンです。Difyのバージョン管理インターフェースでは、これを`Latest`としてマークし、他の過去の公開バージョンと区別しています。 -latest_version +

+ +

- **過去の公開バージョン(Previous Version)**: 以前に公開されたが、現在は最新ではなくなったバージョンを指します。 -prevous_version +

+ +

- **バージョンの復元(Restore)**: バージョン管理の復元機能を使用すると、アプリを特定の過去のバージョンに戻すことができます。 -restore +

+ +

## 主な機能 @@ -46,13 +71,13 @@ title: バージョン管理 1. 右上の**バージョン管理**ボタンをクリックして、バージョン管理インターフェースにアクセスします。 -![view_all_versions](https://assets-docs.dify.ai/2025/03/eed667bbc9498425342c09039054cf98.png) +![View all versions](https://assets-docs.dify.ai/2025/03/eed667bbc9498425342c09039054cf98.png) 2. バージョン管理インターフェースには、時系列の降順で並べられたバージョンリストが表示されます。各バージョンの**名前、説明情報、公開日時、公開者**を確認できます。 3. *(オプション)* バージョンリストが多数ある場合は、**さらに読み込む**ボタンをクリックすると、より多くのバージョン履歴を表示できます。 -![load_more](https://assets-docs.dify.ai/2025/03/df9aeb06128f11089dc2294f0338e2ca.png) +![Load more](https://assets-docs.dify.ai/2025/03/df9aeb06128f11089dc2294f0338e2ca.png) ## 特定のバージョンを検索する方法 @@ -62,21 +87,33 @@ title: バージョン管理 - **自分が公開したバージョン**: あなたが公開したバージョンのみを表示します。 必要に応じて、適切なフィルターを選択して、対応するバージョンをご確認いただけます。 -all_or_only_yours +

+ +

- **名前付きバージョンを検索**: 名前が付けられたバージョンのみを表示したい場合は、**名前付きバージョンを検索**オプションをクリックしてください。このオプションを有効にすると、名前付きバージョンのみがバージョンリストに表示され、名前のないバージョンは非表示になります。 -only_show_named_versions +

+ +

## 新しいバージョンの公開方法 1. チャットフロー/ワークフローの作成が完了したら、画面右上の**公開する > 公開更新**をクリックすると、現在のバージョンが公開されます。 -![publish_new_version](https://assets-docs.dify.ai/2025/03/26f3f324ab4ecb965708d553ddd78d97.png) +![Publish new version](https://assets-docs.dify.ai/2025/03/26f3f324ab4ecb965708d553ddd78d97.png) 2. 公開後、この最新バージョンは`Latest`としてマークされ、関連情報がバージョン管理インターフェースに表示されます。 -![latest_version_marked](https://assets-docs.dify.ai/2025/03/67e95de17577bc272addad6c33f8ea59.png) +![Latest version marked](https://assets-docs.dify.ai/2025/03/67e95de17577bc272addad6c33f8ea59.png) ## 公開済みバージョンの情報編集方法 @@ -84,15 +121,27 @@ title: バージョン管理 - 以前にデフォルト名でバージョンを保存した場合は、**このバージョンに名前を付ける**をクリックします。 -name_this_version +

+ +

- すでに名前を付けている場合は、**バージョン情報を編集**をクリックして、バージョン名と説明を修正できます。 -edit_version_info_1 +

+ +

2. **公開する**をクリックして、バージョン情報を公開します。 -![edit_version_info_2](https://assets-docs.dify.ai/2025/03/838e5a12aa277bada6c2a4a214450fa5.jpg) +![Edit version info](https://assets-docs.dify.ai/2025/03/838e5a12aa277bada6c2a4a214450fa5.jpg) ## 履歴バージョンを削除するには? @@ -100,14 +149,22 @@ title: バージョン管理 2. **削除**を選択すると、確認ダイアログが表示されます。 +

+ +

+ 3. **削除**をクリックすると、そのバージョンがバージョン管理画面から削除されます。 -![delete_version_confirm](https://assets-docs.dify.ai/2025/03/9326fd0463d024aac1907c83a37fe13b.jpg) +![Delete Version Confirmed](https://assets-docs.dify.ai/2025/03/9326fd0463d024aac1907c83a37fe13b.jpg) - + - **下書きバージョン**(Current Draft)は、現在のチャットフロー/ワークフロー画面で編集中のバージョンであり、削除できません。 - **最新公開バージョン**(「Latest」とマークされているバージョン)は、ユーザーが最後に公開したバージョンであり、削除できません。 - + ## 特定の公開済みバージョンに戻すには? @@ -115,9 +172,17 @@ title: バージョン管理 2. **ロールバック**を選択すると、確認ダイアログが表示されます。 +

+ +

+ 3. **ロールバック**をクリックすると、現在の下書きバージョンがその履歴バージョンに置き換えられます。 -![restore_version_confirm](https://assets-docs.dify.ai/2025/03/f3a6e13f2e910f5c7917f52fe77bdfca.jpg) +![Restore version confirmed](https://assets-docs.dify.ai/2025/03/f3a6e13f2e910f5c7917f52fe77bdfca.jpg) ## 使用シナリオ @@ -136,7 +201,7 @@ title: バージョン管理 - Version Aが公開され、**最新公開バージョン**になります。 - システムが自動的に**下書きバージョン** Version Bを作成します。 -![phase 2](https://assets-docs.dify.ai/2025/03/3d1f66cdeb08710f01462a6b0f3ed0a8.jpeg) +![Phase 2](https://assets-docs.dify.ai/2025/03/3d1f66cdeb08710f01462a6b0f3ed0a8.jpeg) ### ステージ3:再公開 @@ -144,14 +209,14 @@ title: バージョン管理 - Version Aは**履歴公開バージョン**になります。 - システムが自動的に**下書きバージョン** Version Cを作成します。 -![phase 3](https://assets-docs.dify.ai/2025/03/92ffbf88a3cbeeeeab47c1bd8b4f7198.jpeg) +![Phase 3](https://assets-docs.dify.ai/2025/03/92ffbf88a3cbeeeeab47c1bd8b4f7198.jpeg) ### ステージ4:ロールバック操作 - Version Aが**下書きバージョン**として復元され、Version Cが上書きされます。 - Version Bは引き続き**最新公開バージョン**です。 -![phase 4](https://assets-docs.dify.ai/2025/03/541f1891416af90dab5b51bfec833249.jpeg) +![Phase 4](https://assets-docs.dify.ai/2025/03/541f1891416af90dab5b51bfec833249.jpeg) ### ステージ5:ロールバック後の公開 @@ -159,22 +224,57 @@ title: バージョン管理 - 以前のVersion AとVersion Bは**履歴公開バージョン**になります。 - システムが自動的に**下書きバージョン** Version Dを作成します。 -![phase 5](https://assets-docs.dify.ai/2025/03/3572a4f2edef166c3f14e4ec4e68b297.jpeg) +![Phase 5](https://assets-docs.dify.ai/2025/03/3572a4f2edef166c3f14e4ec4e68b297.jpeg) ### 全体フロー -![workflow](https://assets-docs.dify.ai/2025/03/dc7c15a4dfafb72ce7fffea294d5b5e5.gif) +![Workflow](https://assets-docs.dify.ai/2025/03/dc7c15a4dfafb72ce7fffea294d5b5e5.gif) ## よくある質問 - **下書きバージョン、公開済みバージョン、最新公開バージョン、履歴公開バージョンの違いは何ですか?** -| 定義 | 操作方法 | オンラインアクセス | 削除可能か | ロールバック可能か | -|------|---------|-----------------|------------|----------------| -| 下書きバージョン | 編集・修正を行い、公開(Publish)操作でオンライン環境に反映させることができます。 | 不可(公開操作後のみアクセス可能) | 削除できません。 | ロールバックできません。 | -| 最新公開バージョン | 直接編集はできません。新しいドラフトバージョンを作成し、公開することで更新できます。 | 可能(現在のオンラインバージョン) 
| 削除できません。 | 可能 | -| 過去の公開バージョン | ロールバック(Restore)操作で過去のバージョンをドラフトバージョンに読み込み、編集・公開できます。 | 不可(バージョンリストにのみ存在) | 削除できません。 | 可能 | -| 公開済みバージョン | 最新公開バージョンと過去の公開バージョンの総称です。 | - | - | - | + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
定義操作方法オンラインアクセス削除可能かロールバック可能か
下書きバージョン編集・修正を行い、公開(Publish)操作でオンライン環境に反映させることができます。不可(公開操作後のみアクセス可能)削除できません。ロールバックできません。
最新公開バージョン直接編集はできません。新しいドラフトバージョンを作成し、公開することで更新できます。可能(現在のオンラインバージョン)削除できません。可能
過去の公開バージョンロールバック(Restore)操作で過去のバージョンをドラフトバージョンに読み込み、編集・公開できます。不可(バージョンリストにのみ存在)削除できません。可能
公開済みバージョン最新公開バージョンと過去の公開バージョンの総称です。---
## よくある質問 @@ -186,4 +286,4 @@ title: バージョン管理 - **バージョン管理機能は、どのタイプのアプリで利用できますか?** -現在、バージョン管理機能は**チャットフロー**と**ワークフロー**のみ対応しています。**チャットアシスタント**、**テキスト生成**、**エージェント**には対応していません。 \ No newline at end of file +現在、バージョン管理機能は**チャットフロー**と**ワークフロー**のみ対応しています。**チャットボット**、**テキスト生成**、**エージェント**には対応していません。 diff --git a/ja-jp/guides/model-configuration/README.mdx b/ja-jp/guides/model-configuration/README.mdx new file mode 100644 index 00000000..a37773b5 --- /dev/null +++ b/ja-jp/guides/model-configuration/README.mdx @@ -0,0 +1,81 @@ +--- +title: モデル +--- + + +Difyは大規模言語モデルに基づいたAIアプリケーション開発プラットフォームです。初めて使用する際には、Difyの**設定 -- モデルプロバイダー**ページで必要なモデルを追加および設定してください。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/458a37f10d6f0d08c3701aa86c447983.png) + +Difyは現在、OpenAIのGPTシリーズやAnthropicのClaudeシリーズなど、主流のモデルプロバイダーをサポートしています。異なるモデルの能力やパラメータの種類が異なるため、アプリケーションのニーズに応じて適切なモデルプロバイダーを選択できます。**Difyで以下のモデル能力を使用する前に、各モデルプロバイダーの公式サイトでAPIキーを取得する必要があります。** + +### モデルタイプ + +Difyでは、モデルの使用シーンに応じて以下の4つのタイプに分類しています: + +1. **システム推論モデル**。アプリケーション内で使用されるのはこのタイプのモデルです。チャット、会話名生成、次の質問の提案でもこの推論モデルが使用されます。 + + > サポートされているシステム推論モデルプロバイダー:[OpenAI](https://platform.openai.com/account/api-keys)、[Azure OpenAIサービス](https://azure.microsoft.com/en-us/products/ai-services/openai-service/)、[Anthropic](https://console.anthropic.com/account/keys)、Hugging Faceハブ、Replicate、Xinference、OpenLLM、[讯飞星火](https://www.xfyun.cn/solutions/xinghuoAPI)、[文心一言](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application)、[通义千问](https://dashscope.console.aliyun.com/api-key\_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k)、[Minimax](https://api.minimax.chat/user-center/basic-information/interface-key)、ZHIPU(ChatGLM) +2. **埋め込みモデル**。データセット内の分割された文書の埋め込みに使用されるのはこのタイプのモデルです。データセットを使用するアプリケーションでは、ユーザーの質問を埋め込み処理する際にもこのタイプのモデルが使用されます。 + + > サポートされている埋め込みモデルプロバイダー:OpenAI、ZHIPU(ChatGLM)、Jina AI([Jina Embeddings](https://jina.ai/embeddings/)) +3. 
[**Rerankモデル**](https://docs.dify.ai/v/ja-jp/learn-more/extended-reading/retrieval-augment/rerank)。**Rerankモデルは検索能力を強化し、LLMの検索結果を改善するために使用されます。** + + > サポートされているRerankモデルプロバイダー:Cohere、Jina AI([Jina Reranker](https://jina.ai/reranker)) +4. **音声からテキストへのモデル**。対話型アプリケーションで音声をテキストに変換する際に使用されるのはこのタイプのモデルです。 + + > サポートされている音声からテキストへのモデルプロバイダー:OpenAI + +技術の進化とユーザーのニーズに応じて、今後もさらに多くのLLMプロバイダーをサポートしていきます。 + +### ホストモデル試用サービス + +Difyクラウドサービスのユーザーには、異なるモデルの試用枠を提供しています。この枠が尽きる前に自分のモデルプロバイダーを設定してください。さもないと、アプリケーションの正常な使用に影響を及ぼす可能性があります。 + +* **OpenAIホストモデル試用:** GPT3.5-turbo、GPT3.5-turbo-16k、text-davinci-003モデルの試用として200回の呼び出し回数を提供します。 + +### デフォルトモデルの設定 + +Difyは使用シーンに応じて設定されたデフォルトモデルを選択します。`設定 > モデルプロバイダー`でデフォルトモデルを設定します。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/22d9f76bae0f0b0a492ce465d3cb0f38.png) + +システム推論モデル:アプリケーションの作成に使用されるデフォルトの推論モデルを設定し、対話名の生成や次のステップの質問に関する提案などの機能も含まれます。 + +### モデルの接続設定 + +Difyの`設定 > モデルプロバイダー`で接続するモデルを設定します。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/a5ec4a8508382ff895a375aa19b84e53.png) + +モデルプロバイダーは2種類に分かれます: + +1. 自社モデル。このタイプのモデルプロバイダーは自社で開発したモデルを提供します。例としてOpenAI、Anthropicなどがあります。 +2. 
ホストモデル。このタイプのモデルプロバイダーは第三者のモデルを提供します。例としてHugging Face、Replicateなどがあります。 + +Difyで異なるタイプのモデルプロバイダーを接続する方法は若干異なります。 + +**自社モデルのモデルプロバイダーの接続** + +自社モデルのプロバイダーを接続すると、Difyはそのプロバイダーのすべてのモデルに自動的に接続します。 + +Difyで対応するモデルプロバイダーのAPIキーを設定するだけで、そのモデルプロバイダーに接続できます。 + + +Difyは[PKCS1\_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html)を使用してユーザーが管理するAPIキーを暗号化して保存しています。各テナントは独立した鍵ペアを使用して暗号化しており、APIキーの漏洩を防止します。 + + +**ホストモデルのモデルプロバイダーの接続** + +ホストタイプのプロバイダーには多くの第三者モデルがあります。モデルの接続には個別に追加が必要です。具体的な接続方法は以下の通りです: + +* [Hugging Face](hugging-face.md) +* [Replicate](replicate.md) +* [Xinference](xinference.md) +* [OpenLLM](openllm.md) + +### モデルの使用 + +モデルの設定が完了したら、アプリケーションでこれらのモデルを使用できます: + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/961fa0a4c3b9e11615170b9851e6a583.png) diff --git a/ja-jp/guides/model-configuration/customizable-model.mdx b/ja-jp/guides/model-configuration/customizable-model.mdx new file mode 100644 index 00000000..4b78c797 --- /dev/null +++ b/ja-jp/guides/model-configuration/customizable-model.mdx @@ -0,0 +1,303 @@ +--- +title: カスタムモデルの追加 +--- + + +### イントロダクション + +ベンダー統合が完了した後、次にベンダーの下でモデルのインテグレーションを行います。ここでは、全体のプロセスを理解するために、例として`Xinference`を使用して、段階的にベンダーのインテグレーションを完了します。 + +注意が必要なのは、カスタムモデルの場合、各モデルのインテグレーションには完全なベンダークレデンシャルの記入が必要です。 + +事前定義モデルとは異なり、カスタムベンダーのインテグレーション時には常に以下の2つのパラメータが存在し、ベンダー yaml に定義する必要はありません。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/244cf2b0a126ed360f9588fbf1050e03.png) + +前述したように、ベンダーは`validate_provider_credential`を実装する必要はなく、Runtimeがユーザーが選択したモデルタイプとモデル名に基づいて、対応するモデル層の`validate_credentials`を呼び出して検証を行います。 + +#### ベンダー yaml の作成 + +まず、インテグレーションを行うベンダーがどのタイプのモデルをサポートしているかを確認します。 + +現在サポートされているモデルタイプは以下の通りです: + +* `llm` テキスト生成モデル +* `text_embedding` テキスト Embedding モデル +* `rerank` Rerank モデル +* `speech2text` 音声からテキスト変換 +* `tts` テキストから音声変換 +* `moderation` モデレーション + +`Xinference`は`LLM`、`Text Embedding`、`Rerank`をサポートしているため、`xinference.yaml`を作成します。 + 
+```yaml +provider: xinference # Specify vendor identifier +label: # Vendor display name, can be set in en_US (English) and zh_Hans (Simplified Chinese). If zh_Hans is not set, en_US will be used by default. + en_US: Xorbits Inference +icon_small: # Small icon, refer to other vendors' icons, stored in the _assets directory under the corresponding vendor implementation directory. Language strategy is the same as label. + en_US: icon_s_en.svg +icon_large: # Large icon + en_US: icon_l_en.svg +help: # Help + title: + en_US: How to deploy Xinference + zh_Hans: 如何部署 Xinference + url: + en_US: https://github.com/xorbitsai/inference +supported_model_types: # Supported model types. Xinference supports LLM/Text Embedding/Rerank +- llm +- text-embedding +- rerank +configurate_methods: # Since Xinference is a locally deployed vendor and does not have predefined models, you need to deploy the required models according to Xinference's documentation. Therefore, only custom models are supported here. +- customizable-model +provider_credential_schema: + credential_form_schemas: +``` + +その後、Xinferenceでモデルを定義するために必要なクレデンシャルを考えます。 + +* 3つの異なるモデルをサポートするため、`model_type`を使用してこのモデルのタイプを指定する必要があります。3つのタイプがあるので、次のように記述します。 + +```yaml +provider_credential_schema: + credential_form_schemas: + - variable: model_type + type: select + label: + en_US: Model type + zh_Hans: 模型类型 + required: true + options: + - value: text-generation + label: + en_US: Language Model + zh_Hans: 语言模型 + - value: embeddings + label: + en_US: Text Embedding + - value: reranking + label: + en_US: Rerank +``` + +* 各モデルには独自の名称`model_name`があるため、ここで定義する必要があります。 + +```yaml + - variable: model_name + type: text-input + label: + en_US: Model name + zh_Hans: 模型名称 + required: true + placeholder: + zh_Hans: 填写模型名称 + en_US: Input model name +``` + +* Xinferenceのローカルデプロイのアドレスを記入します。 + +```yaml + - variable: server_url + label: + zh_Hans: 服务器URL + en_US: Server url + type: text-input + required: true + placeholder: + zh_Hans: 
在此输入Xinference的服务器地址,如 https://example.com/xxx + en_US: Enter the url of your Xinference, for example https://example.com/xxx +``` + +* 各モデルには一意の model\_uid があるため、ここで定義する必要があります。 + +```yaml + - variable: model_uid + label: + zh_Hans: 模型 UID + en_US: Model uid + type: text-input + required: true + placeholder: + zh_Hans: 在此输入你的 Model UID + en_US: Enter the model uid +``` + +これで、ベンダーの基本定義が完了しました。 + +#### モデルコードの作成 + +次に、`llm`タイプを例にとって、`xinference.llm.llm.py`を作成します。 + +`llm.py`内で、Xinference LLM クラスを作成し、`XinferenceAILargeLanguageModel`(任意の名前)と名付けて、`__base.large_language_model.LargeLanguageModel`基底クラスを継承し、以下のメソッドを実装します: + +* LLM 呼び出し + + LLM 呼び出しのコアメソッドを実装し、ストリームレスポンスと同期レスポンスの両方をサポートします。 + + ```python + def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + 実装時には、同期レスポンスとストリームレスポンスを処理するために2つの関数を使用してデータを返す必要があります。Pythonは`yield`キーワードを含む関数をジェネレータ関数として認識し、返されるデータ型は固定でジェネレーターになります。そのため、同期レスポンスとストリームレスポンスは別々に実装する必要があります。以下のように実装します(例では簡略化されたパラメータを使用していますが、実際の実装では上記のパラメータリストに従って実装してください): + + ```python + def _invoke(self, stream: bool, **kwargs) \ + -> Union[LLMResult, Generator]: + if stream: + return self._handle_stream_response(**kwargs) + return self._handle_sync_response(**kwargs) + + def _handle_stream_response(self, **kwargs) -> Generator: + for chunk in response: + yield chunk + def _handle_sync_response(self, **kwargs) -> LLMResult: + return 
LLMResult(**response) + ``` +* 予測トークン数の計算 + + モデルが予測トークン数の計算インターフェースを提供していない場合、直接0を返すことができます。 + + ```python + def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ + ``` + + 時には、直接0を返す必要がない場合もあります。その場合は`self._get_num_tokens_by_gpt2(text: str)`を使用して予測トークン数を取得することができます。このメソッドは`AIModel`基底クラスにあり、GPT2のTokenizerを使用して計算を行いますが、代替方法として使用されるものであり、完全に正確ではありません。 +* モデルクレデンシャル検証 + + ベンダークレデンシャル検証と同様に、ここでは個々のモデルについて検証を行います。 + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` +* モデルパラメータスキーマ + + カスタムタイプとは異なり、yamlファイルでモデルがサポートするパラメータを定義していないため、動的にモデルパラメータのスキーマを生成する必要があります。 + + 例えば、Xinferenceは`max_tokens`、`temperature`、`top_p`の3つのモデルパラメータをサポートしています。 + + しかし、ベンダーによっては異なるモデルに対して異なるパラメータをサポートしている場合があります。例えば、ベンダー`OpenLLM`は`top_k`をサポートしていますが、全てのモデルが`top_k`をサポートしているわけではありません。ここでは、例としてAモデルが`top_k`をサポートし、Bモデルが`top_k`をサポートしていない場合、以下のように動的にモデルパラメータのスキーマを生成します: + + ```python + def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None: + """ + used to define customizable model schema + """ + rules = [ + ParameterRule( + name='temperature', type=ParameterType.FLOAT, + use_template='temperature', + label=I18nObject( + zh_Hans='温度', en_US='Temperature' + ) + ), + ParameterRule( + name='top_p', type=ParameterType.FLOAT, + use_template='top_p', + label=I18nObject( + zh_Hans='Top P', en_US='Top P' + ) + ), + ParameterRule( + name='max_tokens', type=ParameterType.INT, + use_template='max_tokens', + min=1, + default=512, + label=I18nObject( + zh_Hans='最大生成长度', en_US='Max Tokens' + ) + ) + ] 
+ + # if model is A, add top_k to rules + if model == 'A': + rules.append( + ParameterRule( + name='top_k', type=ParameterType.INT, + use_template='top_k', + min=1, + default=50, + label=I18nObject( + zh_Hans='Top K', en_US='Top K' + ) + ) + ) + + """ + some NOT IMPORTANT code here + """ + + entity = AIModelEntity( + model=model, + label=I18nObject( + en_US=model + ), + fetch_from=FetchFrom.CUSTOMIZABLE_MODEL, + model_type=model_type, + model_properties={ + ModelPropertyKey.MODE: ModelType.LLM, + }, + parameter_rules=rules + ) + + return entity + ``` +* 呼び出しエラーマッピングテーブル + + モデル呼び出し時にエラーが発生した場合、Runtimeが指定する`InvokeError`タイプにマッピングする必要があります。これにより、Difyは異なるエラーに対して異なる後続処理を行うことができます。 + + Runtime Errors: + + * `InvokeConnectionError` 呼び出し接続エラー + * `InvokeServerUnavailableError` 呼び出しサービスが利用不可 + * `InvokeRateLimitError` 呼び出し回数制限に達した + * `InvokeAuthorizationError` 認証エラー + * `InvokeBadRequestError` 不正なリクエストパラメータ + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + """ + Map model invoke error to unified error + The key is the error type thrown to the caller + The value is the error type thrown by the model, + which needs to be converted into a unified error type for the caller. 
+ + :return: Invoke error mapping + """ + ``` + +インターフェース方法の詳細については:[インターフェース](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/interfaces.md)をご覧ください。具体的な実装例については、[llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py)を参照してください。 diff --git a/ja-jp/guides/model-configuration/interfaces.mdx b/ja-jp/guides/model-configuration/interfaces.mdx new file mode 100644 index 00000000..fb1b8aa1 --- /dev/null +++ b/ja-jp/guides/model-configuration/interfaces.mdx @@ -0,0 +1,1447 @@ +--- +title: インターフェース方法 +--- + + +ここでは、サプライヤーと各モデルタイプが実装する必要があるインターフェース方法とそのパラメータについて説明します。 + +## サプライヤー + +`__base.model_provider.ModelProvider`基本クラスを継承し、以下のインターフェースを実装する必要があります: + +```python +def validate_provider_credentials(self, credentials: dict) -> None: + """ + Validate provider credentials + You can choose any validate_credentials method of model type or implement validate method by yourself, + such as: get model list api + + if validate failed, raise exception + + :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. 
+ """ +``` + +- `credentials` (object) 資格情報 + + 資格情報のパラメータは、サプライヤーのYAML構成ファイルの `provider_credential_schema` で定義され、`api_key`などが渡されます。 + +検証に失敗した場合は、`errors.validate.CredentialsValidateFailedError`エラーをスローします。 + +**注:事前定義されたモデルはこのインターフェースを完全に実装する必要がありますが、カスタムモデルサプライヤーは以下の簡単な実装のみが必要です** + +```python +class XinferenceProvider(Provider): + def validate_provider_credentials(self, credentials: dict) -> None: + pass +``` + +## モデル + +モデルには5つの異なるモデルタイプがあり、異なる基底クラスを継承し、実装する必要があるメソッドも異なります。 + +### 一般インターフェース + +すべてのモデルには以下の2つのメソッドを実装する必要があります: + +- モデルの資格情報を検証する + + サプライヤーの資格情報検証と同様に、ここでは個々のモデルに対して検証を行います。 + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` + + パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + 検証に失敗した場合は、`errors.validate.CredentialsValidateFailedError` エラーをスローします。 + +- 例外エラーマッピングの呼び出し + + モデルの呼び出しが例外をスローした場合、Runtimeが指定する `InvokeError` タイプにマッピングする必要があり、異なるエラーに対して異なる後続処理を行うためのDifyにとって便利です。 + + Runtime Errors: + + - `InvokeConnectionError` コール接続エラー + - `InvokeServerUnavailableError ` コールサービスが利用できない + - `InvokeRateLimitError ` コールが制限に達した + - `InvokeAuthorizationError` コール認証エラー + - `InvokeBadRequestError ` コールパラメータが誤っています + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + """ + Map model invoke error to unified error + The key is the error type thrown to the caller + The value is the error type thrown by the model, + which needs to be converted into a unified error type for the caller. 
+ + :return: Invoke error mapping + """ + ``` + + または、対応するエラーを直接スローし、以下のように定義することもできます。これにより、後続の呼び出しで `InvokeConnectionError` などの例外を直接スローできます。 + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + return { + InvokeConnectionError: [ + InvokeConnectionError + ], + InvokeServerUnavailableError: [ + InvokeServerUnavailableError + ], + InvokeRateLimitError: [ + InvokeRateLimitError + ], + InvokeAuthorizationError: [ + InvokeAuthorizationError + ], + InvokeBadRequestError: [ + InvokeBadRequestError + ], + } + ``` + +​ OpenAIの `_invoke_error_mapping` をご参照ください。 + +### LLM + +`__base.large_language_model.LargeLanguageModel` 基本クラスを継承し、以下のインターフェースを実装します: + +- LLMの呼び出し + + LLM呼び出しの核心メソッドを実装し、ストリーミングと同期応答の両方をサポートします。 + + ```python + def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `prompt_messages` (array[[PromptMessage](#PromptMessage)]) Prompt リスト + + モデルのタイプが `Completion` の場合、リストには1つの[UserPromptMessage](#UserPromptMessage) 要素のみを渡す必要があります; + + モデルのタイプが `Chat` の場合、[SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), 
[AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) 要素のリストをメッセージに応じて渡す必要があります。 + + - `model_parameters` (object) モデルのパラメータ + + モデルのパラメータは、モデルのYAML設定における`parameter_rules`で定義されています。 + + - `tools` (array[[PromptMessageTool](#PromptMessageTool)]) [optional] ツールのリスト。`function calling` 内の `function` に相当します。 + + つまり、tool calling のためのツールリストを指定します。 + + - `stop` (array[string]) [optional] ストップシーケンス + + モデルが出力を返す際に、定義された文字列の前で出力を停止します。 + + - `stream` (bool) ストリーム出力の有無、デフォルトはTrue + + ストリーム出力の場合は Generator[[LLMResultChunk](#LLMResultChunk)]、非ストリーム出力の場合は [LLMResult](#LLMResult) を返します。 + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値 + + ストリーム出力の場合は Generator[[LLMResultChunk](#LLMResultChunk)]、非ストリーム出力の場合は [LLMResult](#LLMResult) を返します。 + +- 入力tokenの事前計算 + + モデルがtokenの事前計算インターフェースを提供していない場合、直接0を返すことができます。 + + ```python + def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ + ``` + + パラメータの説明は上記の `LLMの呼び出し` を参照してください。 + + このインターフェースは、対応する`model`に基づいて適切な`tokenizer`を選択して計算する必要があります。対応するモデルが`tokenizer`を提供していない場合は、`AIModel`ベースクラスの`_get_num_tokens_by_gpt2(text: str)`メソッドを使用して計算できます。 + +- カスタムモデルスキーマの取得 [オプション] + + ```python + def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]: + """ + Get customizable model schema + + :param model: model name + :param credentials: model credentials + :return: model schema + """ + ``` + +供給業者がカスタムLLMを追加することをサポートしている場合、このメソッドを実装してカスタムモデルがモデル規則を取得できるようにすることができます。デフォルトではNoneを返します。 + 
+ほとんどの微調整モデルは`OpenAI`供給業者の下で、微調整モデル名を使用してベースモデルを取得できます。例えば、`gpt-3.5-turbo-1106`のような微調整モデル名を使用して、基本モデルの事前定義されたパラメータルールを取得できます。具体的な実装については、[openai](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py)を参照してください。 + +### TextEmbedding + +`__base.text_embedding_model.TextEmbeddingModel`ベースクラスを継承し、次のインターフェースを実装します: + +- Embeddingの呼び出し + + ```python + def _invoke(self, model: str, credentials: dict, + texts: list[str], user: Optional[str] = None) \ + -> TextEmbeddingResult: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param texts: texts to embed + :param user: unique user id + :return: embeddings result + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `texts` (array[string]) テキストのリスト。バッチ処理が可能 + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値: + + [TextEmbeddingResult](#TextEmbeddingResult) エンティティ。 + +- tokensの事前計算 + + ```python + def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param texts: texts to embed + :return: + """ + ``` + + パラメータの説明は上記の `Embeddingの呼び出し` を参照してください。 + + 上記の`LargeLanguageModel`と同様に、このインターフェースは、対応する`model`に基づいて適切な`tokenizer`を選択して計算する必要があります。対応するモデルが`tokenizer`を提供していない場合は、`AIModel`ベースクラスの`_get_num_tokens_by_gpt2(text: str)`メソッドを使用して計算できます。 + +### Rerank + +`__base.rerank_model.RerankModel`ベースクラスを継承し、次のインターフェースを実装します: + +- Rerankの呼び出し + + ```python + def _invoke(self, model: str, credentials: dict, + query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, + user: Optional[str] = None) \ + -> RerankResult: + """ + Invoke rerank model + + :param model: model 
name + :param credentials: model credentials + :param query: search query + :param docs: docs for reranking + :param score_threshold: score threshold + :param top_n: top n + :param user: unique user id + :return: rerank result + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `query` (string) 検索リクエストの内容 + + - `docs` (array[string]) 並べ替えが必要なセクションリスト + + - `score_threshold` (float) [optional] Scoreの閾値 + + - `top_n` (int) [optional] 上位n件のセクションを取得します + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値: + + [RerankResult](#RerankResult) エンティティ。 + +### Speech2text + +`__base.speech2text_model.Speech2TextModel`基底クラスを継承し、以下のインターフェースを実装します: + +- モデルの呼び出し + + ```python + def _invoke(self, model: str, credentials: dict, + file: IO[bytes], user: Optional[str] = None) \ + -> str: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param file: audio file + :param user: unique user id + :return: text for given audio file + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `file` (File) ファイルストリーム + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値: + + 音声をテキストに変換した結果を返します。 + +### Text2speech + +`__base.text2speech_model.Text2SpeechModel`基底クラスを継承し、以下のインターフェースを実装します: + +- モデルの呼び出し + + ```python + def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None): + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param content_text: text content to be translated + :param streaming: output is streaming + :param user: unique user id 
+ :return: translated audio file + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `content_text` (string) 変換すべきテキストコンテンツ + + - `streaming` (bool) ストリーミング出力かどうか + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値: + + テキストを音声に変換した結果を返します。 + +### Moderation + +`__base.moderation_model.ModerationModel`基底クラスを継承し、以下のインターフェースを実装します: + +- Invokeの呼び出し + + ```python + def _invoke(self, model: str, credentials: dict, + text: str, user: Optional[str] = None) \ + -> bool: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param text: text to moderate + :param user: unique user id + :return: false if text is safe, true otherwise + """ + ``` + + - パラメータ: + + - `model` (string) モデル名 + + - `credentials` (object) 資格情報 + + 資格情報のパラメータは、供給業者の YAML 構成ファイルの provider_credential_schema または model_credential_schema で定義されており、api_key などの詳細が含まれます。 + + - `text` (string) テキスト内容 + + - `user` (string) [optional] ユーザーの一意の識別子 + + 供給業者が不正行為を監視および検出するのに役立ちます。 + + - 返り値: + + False の場合は入力したテキストは安全であり、True の場合はその逆。 + + + +## エンティティ + +### PromptMessageRole + +メッセージロールを定義する列挙型。 + +```python +class PromptMessageRole(Enum): + """ + Enum class for prompt message. + """ + SYSTEM = "system" + USER = "user" + ASSISTANT = "assistant" + TOOL = "tool" +``` + +### PromptMessageContentType + +メッセージコンテンツのタイプを定義し、テキストと画像の2種類がある。 + +```python +class PromptMessageContentType(Enum): + """ + Enum class for prompt message content type. + """ + TEXT = 'text' + IMAGE = 'image' +``` + +### PromptMessageContent + +メッセージコンテンツの基底クラスであり、パラメータのみを宣言するため初期化は行えない。 + +```python +class PromptMessageContent(BaseModel): + """ + Model class for prompt message content. 
+ """ + type: PromptMessageContentType + data: str # コンテンツデータ +``` +現在、テキストと画像の2つのタイプがサポートされており、テキストと複数の画像を同時に渡すことができる。 + +テキストと画像を同時に渡す場合は、`TextPromptMessageContent` と `ImagePromptMessageContent` をそれぞれ初期化する必要がある。 + +### TextPromptMessageContent + +```python +class TextPromptMessageContent(PromptMessageContent): + """ + Model class for text prompt message content. + """ + type: PromptMessageContentType = PromptMessageContentType.TEXT +``` + +画像とテキストを一緒に渡す場合、テキストは `content` リストの一部としてこのエンティティを構築する必要がある。 + +### ImagePromptMessageContent + +```python +class ImagePromptMessageContent(PromptMessageContent): + """ + Model class for image prompt message content. + """ + class DETAIL(Enum): + LOW = 'low' + HIGH = 'high' + + type: PromptMessageContentType = PromptMessageContentType.IMAGE + detail: DETAIL = DETAIL.LOW # 解像度 +``` + +画像とテキストを一緒に渡す場合、画像は `content` リストの一部としてこのエンティティを構築する必要がある。 + +`data` には `url` または画像の `base64` でエンコードされた文字列を指定することができる。 + +### PromptMessage + +すべてのロールメッセージの基底クラスであり、パラメータのみを宣言するため初期化はできません。 + +```python +class PromptMessage(ABC, BaseModel): + """ + Model class for prompt message. + """ + role: PromptMessageRole # メッセージロール + content: Optional[str | list[PromptMessageContent]] = None # 2つのタイプ、文字列とコンテンツリストをサポート。コンテンツリストはマルチモーダルのニーズを満たすために使用されます。PromptMessageContentの説明を参照してください。 + name: Optional[str] = None # 名前、オプション +``` + +### UserPromptMessage + +UserMessage ユーザーメッセージを表すクラス。 + +```python +class UserPromptMessage(PromptMessage): + """ + Model class for user prompt message. + """ + role: PromptMessageRole = PromptMessageRole.USER +``` + +### AssistantPromptMessage + +モデルの返信メッセージを表し、通常は `few-shots` やチャット履歴が入力として使用されます。 + +```python +class AssistantPromptMessage(PromptMessage): + """ + Model class for assistant prompt message. + """ + class ToolCall(BaseModel): + """ + Model class for assistant prompt message tool call. + """ + class ToolCallFunction(BaseModel): + """ + Model class for assistant prompt message tool call function. 
+ """ + name: str # ツール名 + arguments: str # ツールパラメータ + + id: str # ツールID。OpenAI tool callの場合のみ有効で、ツール呼び出しのユニークIDです。同じツールを複数回呼び出すことができます。 + type: str # デフォルト function + function: ToolCallFunction # ツール呼び出し情報 + + role: PromptMessageRole = PromptMessageRole.ASSISTANT + tool_calls: list[ToolCall] = [] # モデルの返信としてのツール呼び出し結果(`tools`を渡した場合のみ、モデルがツール呼び出しが必要と判断した場合に返されます) +``` + +`tool_calls` は、モデルに `tools` を渡した後、モデルが返す `tool call` のリストです。 + +### SystemPromptMessage + +システムメッセージを表し、通常はモデルに与えられるシステム命令に使用されます。 + +```python +class SystemPromptMessage(PromptMessage): + """ + Model class for system prompt message. + """ + role: PromptMessageRole = PromptMessageRole.SYSTEM +``` + +### ToolPromptMessage + +ツールメッセージを表し、ツールの実行結果をモデルに渡して次のステップの計画を行います。 + +```python +class ToolPromptMessage(PromptMessage): + """ + Model class for tool prompt message. + """ + role: PromptMessageRole = PromptMessageRole.TOOL + tool_call_id: str # ツール呼び出しID。OpenAI tool callをサポートしない場合、ツール名を渡すこともできます。 +``` + +基类的 `content` 传入工具执行结果。 + +### PromptMessageTool + +```python +class PromptMessageTool(BaseModel): + """ + Model class for prompt message tool. + """ + name: str # ツール名 + description: str # ツールの説明 + parameters: dict # ツールパラメータ dict +``` + +--- + +### LLMResult + +```python +class LLMResult(BaseModel): + """ + Model class for llm result. + """ + model: str # 使用された実際のモデル + prompt_messages: list[PromptMessage] # プロンプトメッセージのリスト + message: AssistantPromptMessage # 返信メッセージ + usage: LLMUsage # 使用したtokenとコスト情報 + system_fingerprint: Optional[str] = None # リクエスト指紋。OpenAIのこのパラメータの定義を参照。 +``` + +### LLMResultChunkDelta + +ストリーム化された各イテレーション内の `delta` エンティティ。 + +```python +class LLMResultChunkDelta(BaseModel): + """ + Model class for llm result chunk delta. 
+ """ + index: int # インデックス + message: AssistantPromptMessage # 返信メッセージ + usage: Optional[LLMUsage] = None # 使用したトークンとコスト情報(最後の1つのみ) + finish_reason: Optional[str] = None # 終了理由(最後の1つのみ) +``` + +### LLMResultChunk + +ストリーム化された各イテレーションのエンティティ。 + +```python +class LLMResultChunk(BaseModel): + """ + Model class for llm result chunk. + """ + model: str # 実際に使用したモデル + prompt_messages: list[PromptMessage] # プロンプトメッセージのリスト + system_fingerprint: Optional[str] = None # リクエスト指紋。OpenAIのこのパラメータの定義を参照。 + delta: LLMResultChunkDelta # 各イテレーションの変更が存在する内容 +``` + +### LLMUsage + +```python +class LLMUsage(ModelUsage): + """ + Model class for llm usage. + """ + prompt_tokens: int # プロンプトで使用したトークン数 + prompt_unit_price: Decimal # プロンプトの単価 + prompt_price_unit: Decimal # プロンプト料金の単位(単価が基づいているトークンの量) + prompt_price: Decimal # プロンプトの料金 + completion_tokens: int # 返答で使用したトークン数 + completion_unit_price: Decimal # 返答の単価 + completion_price_unit: Decimal # 返答料金の単位(単価が基づいているトークンの量) + completion_price: Decimal # 返答の料金 + total_tokens: int # 総使用トークン数 + total_price: Decimal # 総料金 + currency: str # 通貨単位 + latency: float # リクエスト処理時間(秒) +``` + +--- + +### TextEmbeddingResult + +```python +class TextEmbeddingResult(BaseModel): + """ + Model class for text embedding result. + """ + model: str # 実際に使用したモデル + embeddings: list[list[float]] # テキストリストに対応するembeddingベクトルのリスト + usage: EmbeddingUsage # 使用した情報 +``` + +### EmbeddingUsage + +```python +class EmbeddingUsage(ModelUsage): + """ + Model class for embedding usage. + """ + tokens: int # 使用した token 数 + total_tokens: int # 総使用 token 数 + unit_price: Decimal # 単価 + price_unit: Decimal # 価格の単位(単価が基づいているトークンの量) + total_price: Decimal # 総料金 + currency: str # 通貨単位 + latency: float # リクエスト処理時間(s) +``` + +--- + +### RerankResult + +```python +class RerankResult(BaseModel): + """ + Model class for rerank result. 
+ """ + model: str # 実際に使用したモデル + docs: list[RerankDocument] # Rerankされたセグメントリスト +``` + +### RerankDocument + +```python +class RerankDocument(BaseModel): + """ + Model class for rerank document. + """ + index: int # 元の文書の順番 + text: str # 文書のテキスト内容 + score: float # スコア +``` + interfaces: + +```python +def validate_provider_credentials(self, credentials: dict) -> None: + """ + Validate provider credentials + You can choose any validate_credentials method of model type or implement validate method by yourself, + such as: get model list api + + if validate failed, raise exception + + :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. + """ +``` + +- `credentials` (object) Credential information + + The parameters of credential information are defined by the `provider_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + +If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error. + +## Model + +Models are divided into 5 different types, each inheriting from different base classes and requiring the implementation of different methods. + +All models need to uniformly implement the following 2 methods: + +- Model Credential Verification + + Similar to provider credential verification, this step involves verification for an individual model. + + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` + + Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. 
+ + If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error. + +- Invocation Error Mapping Table + + When there is an exception in model invocation, it needs to be mapped to the `InvokeError` type specified by Runtime. This facilitates Dify's ability to handle different errors with appropriate follow-up actions. + + Runtime Errors: + + - `InvokeConnectionError` Invocation connection error + - `InvokeServerUnavailableError` Invocation service provider unavailable + - `InvokeRateLimitError` Invocation reached rate limit + - `InvokeAuthorizationError` Invocation authorization failure + - `InvokeBadRequestError` Invocation parameter error + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + """ + Map model invoke error to unified error + The key is the error type thrown to the caller + The value is the error type thrown by the model, + which needs to be converted into a unified error type for the caller. + + :return: Invoke error mapping + """ + ``` + +​ You can refer to OpenAI's `_invoke_error_mapping` for an example. + +### LLM + +Inherit the `__base.large_language_model.LargeLanguageModel` base class and implement the following interfaces: + +- LLM Invocation + + Implement the core method for LLM invocation, which can support both streaming and synchronous returns. 
 + + + ```python + def _invoke(self, model: str, credentials: dict, + prompt_messages: list[PromptMessage], model_parameters: dict, + tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, + stream: bool = True, user: Optional[str] = None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `prompt_messages` (array[[PromptMessage](#PromptMessage)]) List of prompts + + If the model is of the `Completion` type, the list only needs to include one [UserPromptMessage](#UserPromptMessage) element; + + If the model is of the `Chat` type, it requires a list of elements such as [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) depending on the message. + + - `model_parameters` (object) Model parameters + + The model parameters are defined by the `parameter_rules` in the model's YAML configuration. + + - `tools` (array[[PromptMessageTool](#PromptMessageTool)]) [optional] List of tools, equivalent to the `function` in `function calling`. + + That is, the tool list for tool calling. + + - `stop` (array[string]) [optional] Stop sequences + + The model output will stop before the string defined by the stop sequence. 
 + + - `stream` (bool) Whether to output in a streaming manner, default is True + + Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult). + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns + + Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult). + +- Pre-calculating Input Tokens + + If the model does not provide an interface for pre-calculating tokens, you can directly return 0. + + ```python + def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ + ``` + + For parameter explanations, refer to the above section on `LLM Invocation`. + +- Fetch Custom Model Schema [Optional] + + ```python + def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]: + """ + Get customizable model schema + + :param model: model name + :param credentials: model credentials + :return: model schema + """ + ``` + + When the provider supports adding custom LLMs, this method can be implemented to allow custom models to fetch the model schema. By default, it returns None. 
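The dual return type of `_invoke` (a full result for synchronous calls, a chunk generator for streaming) can be sketched without any framework code. The `FakeResult` / `FakeChunk` classes below are hypothetical stand-ins for `LLMResult` / `LLMResultChunk`, and the echo "model" is purely illustrative — this is not the actual Dify implementation:

```python
from dataclasses import dataclass
from typing import Generator, Union

# Hypothetical stand-ins for LLMResult / LLMResultChunk, with just enough
# shape to demonstrate the dual return type of _invoke.
@dataclass
class FakeChunk:
    index: int   # iteration index, cf. LLMResultChunkDelta.index
    delta: str   # incremental content for this iteration

@dataclass
class FakeResult:
    text: str    # the fully assembled response

def invoke(prompt: str, stream: bool = True) -> Union[FakeResult, Generator[FakeChunk, None, None]]:
    # Echo "model": it simply replays the prompt token by token.
    tokens = prompt.split()
    if stream:
        # Streaming: return a generator yielding one chunk per iteration.
        return (FakeChunk(index=i, delta=t) for i, t in enumerate(tokens))
    # Non-streaming: return a single assembled result.
    return FakeResult(text=" ".join(tokens))

full = invoke("hello world", stream=False)
deltas = [c.delta for c in invoke("hello world", stream=True)]
```

The caller branches on `stream` exactly as Dify does: iterate the generator for streaming output, or read the single result otherwise.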
+ + +### TextEmbedding + +Inherit the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces: + +- Embedding Invocation + + ```python + def _invoke(self, model: str, credentials: dict, + texts: list[str], user: Optional[str] = None) \ + -> TextEmbeddingResult: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param texts: texts to embed + :param user: unique user id + :return: embeddings result + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `texts` (array[string]) List of texts, capable of batch processing + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + [TextEmbeddingResult](#TextEmbeddingResult) entity. + +- Pre-calculating Tokens + + ```python + def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param texts: texts to embed + :return: + """ + ``` + + For parameter explanations, refer to the above section on `Embedding Invocation`. 
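The batch contract above (one embedding vector per entry in `texts`) can be illustrated with a toy embedder. The two "dimensions" here — character count and word count — are purely illustrative stand-ins for a real model's output, not an actual embedding:

```python
# Toy batch "embedder": returns one vector per input text, mirroring the
# shape of TextEmbeddingResult.embeddings. The 2-dim vector (character
# count, word count) is illustrative only, not a real embedding model.
def embed_texts(texts: list[str]) -> list[list[float]]:
    return [[float(len(t)), float(len(t.split()))] for t in texts]

vectors = embed_texts(["hello world", "hi"])
```

Whatever the model, the invariant to preserve is `len(result.embeddings) == len(texts)`, with vectors in the same order as the inputs.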
+ +### Rerank + +Inherit the `__base.rerank_model.RerankModel` base class and implement the following interfaces: + +- Rerank Invocation + + ```python + def _invoke(self, model: str, credentials: dict, + query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, + user: Optional[str] = None) \ + -> RerankResult: + """ + Invoke rerank model + + :param model: model name + :param credentials: model credentials + :param query: search query + :param docs: docs for reranking + :param score_threshold: score threshold + :param top_n: top n + :param user: unique user id + :return: rerank result + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `query` (string) Query request content + + - `docs` (array[string]) List of segments to be reranked + + - `score_threshold` (float) [optional] Score threshold + + - `top_n` (int) [optional] Select the top n segments + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + [RerankResult](#RerankResult) entity. 
+ +### Speech2text + +Inherit the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces: + +- Invoke Invocation + + ```python + def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) -> str: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param file: audio file + :param user: unique user id + :return: text for given audio file + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `file` (File) File stream + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + The string after speech-to-text conversion. + +### Text2speech + +Inherit the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces: + +- Invoke Invocation + + ```python + def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None): + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param content_text: text content to be translated + :param streaming: output is streaming + :param user: unique user id + :return: translated audio file + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. 
 + + - `content_text` (string) The text content that needs to be converted + + - `streaming` (bool) Whether to stream output + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + The audio stream converted from the text. + +### Moderation + +Inherit the `__base.moderation_model.ModerationModel` base class and implement the following interfaces: + +- Invoke Invocation + + ```python + def _invoke(self, model: str, credentials: dict, + text: str, user: Optional[str] = None) \ + -> bool: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param text: text to moderate + :param user: unique user id + :return: false if text is safe, true otherwise + """ + ``` + + - Parameters: + + - `model` (string) Model name + + - `credentials` (object) Credential information + + The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. + + - `text` (string) Text content + + - `user` (string) [optional] Unique identifier of the user + + This can help the provider monitor and detect abusive behavior. + + - Returns: + + False indicates that the input text is safe, True indicates otherwise. + + + +## Entities + +### PromptMessageRole + +Enumeration of message roles. + +```python +class PromptMessageRole(Enum): + """ + Enum class for prompt message. + """ + SYSTEM = "system" + USER = "user" + ASSISTANT = "assistant" + TOOL = "tool" +``` + +### PromptMessageContentType + +Message content types, divided into text and image. + +```python +class PromptMessageContentType(Enum): + """ + Enum class for prompt message content type. + """ + TEXT = 'text' + IMAGE = 'image' +``` + +### PromptMessageContent + +Message content base class, used only for parameter declaration and cannot be initialized. 
+ +```python +class PromptMessageContent(BaseModel): + """ + Model class for prompt message content. + """ + type: PromptMessageContentType + data: str +``` + +Currently, two types are supported: text and image. It's possible to simultaneously input text and multiple images. + +You need to initialize `TextPromptMessageContent` and `ImagePromptMessageContent` separately for input. + +### TextPromptMessageContent + +```python +class TextPromptMessageContent(PromptMessageContent): + """ + Model class for text prompt message content. + """ + type: PromptMessageContentType = PromptMessageContentType.TEXT +``` + +If inputting a combination of text and images, the text needs to be constructed into this entity as part of the `content` list. + +### ImagePromptMessageContent + +```python +class ImagePromptMessageContent(PromptMessageContent): + """ + Model class for image prompt message content. + """ + class DETAIL(Enum): + LOW = 'low' + HIGH = 'high' + + type: PromptMessageContentType = PromptMessageContentType.IMAGE + detail: DETAIL = DETAIL.LOW # Resolution +``` + +If inputting a combination of text and images, the images need to be constructed into this entity as part of the `content` list. + +`data` can be either a `url` or a `base64` encoded string of the image. + +### PromptMessage + +The base class for all Role message bodies, used only for parameter declaration and cannot be initialized. + +```python +class PromptMessage(ABC, BaseModel): + """ + Model class for prompt message. + """ + role: PromptMessageRole + content: Optional[str | list[PromptMessageContent]] = None # Supports two types: string and content list. The content list is designed to meet the needs of multimodal inputs. For more details, see the PromptMessageContent explanation. + name: Optional[str] = None +``` + +### UserPromptMessage + +UserMessage message body, representing a user's message. + +```python +class UserPromptMessage(PromptMessage): + """ + Model class for user prompt message. 
+ """ + role: PromptMessageRole = PromptMessageRole.USER +``` + +### AssistantPromptMessage + +Represents a message returned by the model, typically used for `few-shots` or inputting chat history. + +```python +class AssistantPromptMessage(PromptMessage): + """ + Model class for assistant prompt message. + """ + class ToolCall(BaseModel): + """ + Model class for assistant prompt message tool call. + """ + class ToolCallFunction(BaseModel): + """ + Model class for assistant prompt message tool call function. + """ + name: str # tool name + arguments: str # tool arguments + + id: str # Tool ID, effective only in OpenAI tool calls. It's the unique ID for tool invocation and the same tool can be called multiple times. + type: str # default: function + function: ToolCallFunction # tool call information + + role: PromptMessageRole = PromptMessageRole.ASSISTANT + tool_calls: list[ToolCall] = [] # The result of tool invocation in response from the model (returned only when tools are input and the model deems it necessary to invoke a tool). +``` + +Where `tool_calls` are the list of `tool calls` returned by the model after invoking the model with the `tools` input. + +### SystemPromptMessage + +Represents system messages, usually used for setting system commands given to the model. + +```python +class SystemPromptMessage(PromptMessage): + """ + Model class for system prompt message. + """ + role: PromptMessageRole = PromptMessageRole.SYSTEM +``` + +### ToolPromptMessage + +Represents tool messages, used for conveying the results of a tool execution to the model for the next step of processing. + +```python +class ToolPromptMessage(PromptMessage): + """ + Model class for tool prompt message. + """ + role: PromptMessageRole = PromptMessageRole.TOOL + tool_call_id: str # Tool invocation ID. If OpenAI tool call is not supported, the name of the tool can also be inputted. +``` + +The base class's `content` takes in the results of tool execution. 
+
+### PromptMessageTool
+
+```python
+class PromptMessageTool(BaseModel):
+    """
+    Model class for prompt message tool.
+    """
+    name: str
+    description: str
+    parameters: dict
+```
+
+---
+
+### LLMResult
+
+```python
+class LLMResult(BaseModel):
+    """
+    Model class for llm result.
+    """
+    model: str # Actual model used
+    prompt_messages: list[PromptMessage] # prompt messages
+    message: AssistantPromptMessage # response message
+    usage: LLMUsage # usage info
+    system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
+```
+
+### LLMResultChunkDelta
+
+In streaming returns, each iteration contains the `delta` entity.
+
+```python
+class LLMResultChunkDelta(BaseModel):
+    """
+    Model class for llm result chunk delta.
+    """
+    index: int
+    message: AssistantPromptMessage # response message
+    usage: Optional[LLMUsage] = None # usage info
+    finish_reason: Optional[str] = None # finish reason, only the last one returns
+```
+
+### LLMResultChunk
+
+Each iteration entity in streaming returns.
+
+```python
+class LLMResultChunk(BaseModel):
+    """
+    Model class for llm result chunk.
+    """
+    model: str # Actual model used
+    prompt_messages: list[PromptMessage] # prompt messages
+    system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
+    delta: LLMResultChunkDelta
+```
+
+### LLMUsage
+
+```python
+class LLMUsage(ModelUsage):
+    """
+    Model class for LLM usage.
+ """ + prompt_tokens: int # Tokens used for prompt + prompt_unit_price: Decimal # Unit price for prompt + prompt_price_unit: Decimal # Price unit for prompt, i.e., the unit price based on how many tokens + prompt_price: Decimal # Cost for prompt + completion_tokens: int # Tokens used for response + completion_unit_price: Decimal # Unit price for response + completion_price_unit: Decimal # Price unit for response, i.e., the unit price based on how many tokens + completion_price: Decimal # Cost for response + total_tokens: int # Total number of tokens used + total_price: Decimal # Total cost + currency: str # Currency unit + latency: float # Request latency (s) +``` + +--- + +### TextEmbeddingResult + +```python +class TextEmbeddingResult(BaseModel): + """ + Model class for text embedding result. + """ + model: str # Actual model used + embeddings: list[list[float]] # List of embedding vectors, corresponding to the input texts list + usage: EmbeddingUsage # Usage information +``` + +### EmbeddingUsage + +```python +class EmbeddingUsage(ModelUsage): + """ + Model class for embedding usage. + """ + tokens: int # Number of tokens used + total_tokens: int # Total number of tokens used + unit_price: Decimal # Unit price + price_unit: Decimal # Price unit, i.e., the unit price based on how many tokens + total_price: Decimal # Total cost + currency: str # Currency unit + latency: float # Request latency (s) +``` + +--- + +### RerankResult + +```python +class RerankResult(BaseModel): + """ + Model class for rerank result. + """ + model: str # Actual model used + docs: list[RerankDocument] # Reranked document list +``` + +### RerankDocument + +```python +class RerankDocument(BaseModel): + """ + Model class for rerank document. 
+ """ + index: int # original index + text: str + score: float +``` diff --git a/ja-jp/guides/model-configuration/load-balancing.mdx b/ja-jp/guides/model-configuration/load-balancing.mdx new file mode 100644 index 00000000..0be0aceb --- /dev/null +++ b/ja-jp/guides/model-configuration/load-balancing.mdx @@ -0,0 +1,38 @@ +--- +title: 負荷分散 +--- + + +モデルのレート制限(Rate limits)とは、モデルプロバイダーがユーザーまたは顧客に対し、指定された時間内にAPIサービスへアクセスする回数に対して設ける制限のことです。これにより、APIの乱用や誤用を防ぎ、すべてのユーザーが公平にAPIにアクセスできるようにし、インフラ全体の負荷を管理することができます。 + +企業レベルで大規模にモデルAPIを呼び出す際、高い同時リクエストがレート制限を超えてしまい、ユーザーのアクセスに影響を及ぼすことがあります。負荷分散は、複数のAPIエンドポイント間でAPIリクエストを分配することで、すべてのユーザーが最速の応答と最高のモデル呼び出しスループットを得られるようにし、ビジネスの安定した運用を保障します。 + +**モデルプロバイダー -- モデルリスト -- 負荷分散の設定** でこの機能を有効にし、同じモデルに複数の資格情報(APIキー)を追加することができます。 + +![モデルを負荷分散する](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/5caa857ac9c263d557e970cb33558b1b.png) + + +モデル負荷分散は有料機能です。[SaaS有料サービスのサブスクリプション](../../getting-started/cloud.md#ding-yue-ji-hua)または企業版の購入を通じてこの機能を有効にすることができます。 + + +デフォルト設定では、APIキーは初回設定時にモデルプロバイダーに追加された資格情報です。**設定の追加** をクリックして、同じモデルの異なるAPIキーを追加することで、負荷分散機能を正常に使用できます。 + +![負荷分散の設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/d8bfe128e70293392fcfa9df1505a2f4.png) + +**少なくとも1つの追加モデル資格情報**を追加することで、保存し負荷分散を有効にできます。 + +既に設定されている資格情報を**一時的に無効化**または**削除**することも可能です。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/642c0ff7edfdfe77fba43aa22cc3fa71.png) + +設定完了後、モデルリスト内にすべての有効な負荷分散モデルが表示されます。 + +![負荷分散の有効化](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/6123eff333b31660fe2f92b0c2e16365.png) + + +デフォルトでは、負荷分散はラウンドロビン戦略を使用します。レート制限を超えた場合、1分間のクールダウンタイムが適用されます。 + + +**モデルの追加**からも負荷分散を設定することができ、設定手順は上記と同じです。 + +![モデルの追加から負荷分散を設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/model-configuration/12970502b2e202d1f890dcecadf2dcbd.png) diff --git a/ja-jp/guides/model-configuration/new-provider.mdx 
b/ja-jp/guides/model-configuration/new-provider.mdx
new file mode 100644
index 00000000..31cd4ff0
--- /dev/null
+++ b/ja-jp/guides/model-configuration/new-provider.mdx
@@ -0,0 +1,195 @@
+---
+title: 新しいプロバイダーの追加
+---
+
+
+### モデル設定方法
+
+プロバイダーは三つのモデル設定方法に対応しています:
+
+**事前定義モデル(predefined-model)**
+
+ユーザーは統一されたプロバイダーのクレデンシャルを設定するだけで、プロバイダーの事前定義モデルを使用できます。
+
+**カスタマイズ可能モデル(customizable-model)**
+
+ユーザーは各モデルのクレデンシャル設定を追加する必要があります。例えば、XinferenceはLLMとテキスト埋め込みの両方に対応していますが、各モデルには一意の**モデルUID**があり、両方を同時に接続したい場合は、それぞれのモデルに対して**モデルUID**を設定する必要があります。
+
+**リモートから取得(fetch-from-remote)**
+
+`predefined-model`の設定方法と一致しており、統一されたプロバイダーのクレデンシャルを設定するだけで、モデルはクレデンシャル情報を通じてプロバイダーから取得されます。
+
+例えばOpenAIの場合、gpt-3.5-turboを基に複数のモデルを微調整することができ、それらはすべて同じ**APIキー**の下にあります。`fetch-from-remote`として設定すると、開発者は統一された**APIキー**を設定するだけで、Difyランタイムが開発者のすべての微調整モデルを取得してDifyに接続できます。
+
+これら三つの設定方法は**共存可能**であり、例えばプロバイダーが`predefined-model`と`customizable-model`、または`predefined-model`と`fetch-from-remote`をサポートする場合があります。統一されたプロバイダーのクレデンシャルを設定することで、事前定義モデルとリモートから取得したモデルを使用でき、新しいモデルを追加することでカスタマイズ可能なモデルも使用できます。
+
+### 設定説明
+
+**用語解説**
+
+* `モジュール`: 一つの`モジュール`は一つのPythonパッケージ、または簡単に言えば一つのフォルダーであり、その中に`__init__.py`ファイルと他の`.py`ファイルが含まれます。
+
+**手順**
+
+新しいプロバイダーを追加するには主にいくつかのステップがあります。ここでは簡単に列挙し、具体的な手順は以下で詳しく説明します。
+
+* プロバイダーのYAMLファイルを作成し、[プロバイダースキーマ](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md)に基づいて記述します。
+* プロバイダーのコードを作成し、`class`を実装します。
+* モデルタイプに応じて、プロバイダーの`モジュール`内に対応するモデルタイプの`モジュール`を作成します。例えば`llm`や`text_embedding`。
+* モデルタイプに応じて、対応するモデル`モジュール`内に同名のコードファイル(例えば`llm.py`)を作成し、`class`を実装します。
+* 事前定義モデルがある場合、モデル名と同名のyamlファイルをモデル`モジュール`内に作成し、[AIモデルエンティティ](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md#aimodelentity)に基づいて記述します。
+* テストコードを記述し、機能が正しく動作することを確認します。
+
+#### 始めましょう
+
+新しいプロバイダーを追加するには、まずプロバイダーの英語識別子を決めます(例えば`anthropic`)。この識別子を使って`model_providers`内に同名の`モジュール`を作成します。
+
+この`モジュール`内で、まずプロバイダーのYAML設定を準備する必要があります。
+
+**プロバイダーYAMLの準備**
+
+ここでは`Anthropic`を例に、プロバイダーの基本情報、対応するモデルタイプ、設定方法、クレデンシャルルールを設定します。 + +```YAML +provider: anthropic # プロバイダーの識別子 +label: # プロバイダーの表示名、en_US英語、zh_Hans中国語の二言語を設定できます。zh_Hansが設定されていない場合、en_USがデフォルトで使用されます。 + en_US: Anthropic +icon_small: # プロバイダーの小アイコン、対応するプロバイダーの実装ディレクトリ内の_assetsディレクトリに保存されます。labelと同じく二言語の設定が可能です。 + en_US: icon_s_en.png +icon_large: # プロバイダーの大アイコン、対応するプロバイダーの実装ディレクトリ内の_assetsディレクトリに保存されます。labelと同じく二言語の設定が可能です。 + en_US: icon_l_en.png +supported_model_types: # 対応するモデルタイプ、AnthropicはLLMのみ対応 +- llm +configurate_methods: # 対応する設定方法、Anthropicは事前定義モデルのみ対応 +- predefined-model +provider_credential_schema: # プロバイダーのクレデンシャルルール、Anthropicは事前定義モデルのみ対応するため、統一されたプロバイダーのクレデンシャルルールを定義する必要があります + credential_form_schemas: # クレデンシャルフォーム項目リスト + - variable: anthropic_api_key # クレデンシャルパラメーターの変数名 + label: # 表示名 + en_US: API Key + type: secret-input # フォームタイプ、ここではsecret-inputは暗号化された情報入力フィールドを意味し、編集時にはマスクされた情報のみが表示されます。 + required: true # 必須かどうか + placeholder: # プレースホルダー情報 + zh_Hans: 在此输入你的 API Key + en_US: Enter your API Key + - variable: anthropic_api_url + label: + en_US: API URL + type: text-input # フォームタイプ、ここではtext-inputはテキスト入力フィールドを意味します + required: false + placeholder: + zh_Hans: 在此输入你的 API URL + en_US: Enter your API URL +``` + +カスタマイズ可能なモデルを提供するプロバイダー、例えば`OpenAI`が微調整モデルを提供する場合、[`モデルクレデンシャルスキーマ`](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md)を追加する必要があります。以下は`OpenAI`を例にしたものです: + +```yaml +model_credential_schema: + model: # 微調整モデルの名称 + label: + en_US: Model Name + zh_Hans: 模型名称 + placeholder: + en_US: Enter your model name + zh_Hans: 输入模型名称 + credential_form_schemas: + - variable: openai_api_key + label: + en_US: API Key + type: secret-input + required: true + placeholder: + zh_Hans: 在此输入你的 API Key + en_US: Enter your API Key + - variable: openai_organization + label: + zh_Hans: 组织 ID + en_US: Organization + type: text-input + required: false + placeholder: + zh_Hans: 在此输入你的组织 ID + en_US: Enter your Organization ID + - variable: 
openai_api_base + label: + zh_Hans: API Base + en_US: API Base + type: text-input + required: false + placeholder: + zh_Hans: 在此输入你的 API Base + en_US: Enter your API Base +``` + +`model_providers`ディレクトリ内の他のプロバイダーディレクトリの[YAML設定情報](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/schema.md)も参考にできます。 + +**プロバイダーコードの実装** + +`model_providers`内に同名のPythonファイルを作成します。例えば`anthropic.py`を作成し、`class`を実装、`__base.provider.Provider`基クラスを継承します。例えば`AnthropicProvider`。 + +**カスタマイズ可能モデルプロバイダー** + +プロバイダーがXinferenceなどのカスタマイズ可能モデルプロバイダーの場合、このステップをスキップし、空の`XinferenceProvider`クラスを作成し、空の`validate_provider_credentials`メソッドを実装するだけで済みます。このメソッドは実際には使用されず、抽象クラスのインスタンス化を避けるためにのみ存在します。 + +```python +class XinferenceProvider(Provider): + def validate_provider_credentials(self, credentials: dict) -> None: + pass +``` + +**事前定義モデルプロバイダー** + +プロバイダーは`__base.model_provider.ModelProvider`基クラスを継承し、`validate_provider_credentials`プロバイダーの統一クレデンシャル検証メソッドを実装するだけで済みます。[AnthropicProvider](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/model_providers/anthropic/anthropic.py)を参考にできます。 + +```python +def validate_provider_credentials(self, credentials: dict) -> None: + """ + Validate provider credentials + You can choose any validate_credentials method of model type or implement validate method by yourself, + such as: get model list api + + if validate failed, raise exception + + :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. 
+    """
+```
+
+もちろん、`validate_provider_credentials`の実装をひとまず空にしておき、モデルクレデンシャル検証メソッドの実装後にそれを再利用することもできます。
+
+**モデルの追加**
+
+[**事前定義モデルの追加**](https://docs.dify.ai/v/ja-jp/guides/model-configuration/predefined-model)**👈🏻**
+
+事前定義モデルの場合、単純にyamlを定義し、呼び出しコードを実装することで接続できます。
+
+[**カスタマイズ可能モデルの追加**](https://docs.dify.ai/v/ja-jp/guides/model-configuration/customizable-model) **👈🏻**
+
+カスタマイズ可能モデルの場合、呼び出しコードを実装するだけで接続できますが、処理するパラメーターはさらに複雑になる可能性があります。
+
+***
+
+#### テスト
+
+プロバイダー/モデルが正しく動作することを保証するため、実装した各メソッドには`tests`ディレクトリ内で対応する統合テストコードを記述する必要があります。
+
+再び`Anthropic`を例にします。
+
+テストコードを記述する前に、`.env.example`にテストプロバイダーが必要とするクレデンシャル環境変数を追加します。例えば:`ANTHROPIC_API_KEY`。
+
+実行前に`.env.example`をコピーして`.env`にし、実行します。
+
+**テストコードの記述**
+
+`tests`ディレクトリ内にプロバイダーと同名の`モジュール`を作成します:`anthropic`。このモジュール内に`test_provider.py`および対応するモデルタイプのテストpyファイルを作成します。以下のようになります:
+
+```shell
+.
+├── __init__.py
+├── anthropic
+│   ├── __init__.py
+│   ├── test_llm.py # LLMテスト
+│   └── test_provider.py # プロバイダーテスト
+```
+
+上記で実装したコードの様々な状況に対してテストコードを記述し、テストを通過した後にコードを提出します。
diff --git a/ja-jp/guides/model-configuration/predefined-model.mdx b/ja-jp/guides/model-configuration/predefined-model.mdx
new file mode 100644
index 00000000..d4b1ac95
--- /dev/null
+++ b/ja-jp/guides/model-configuration/predefined-model.mdx
@@ -0,0 +1,200 @@
+---
+title: 事前定義されたモデルの追加
+---
+
+
+プロバイダー統合完了後、次にプロバイダーへのモデルの接続を行います。
+
+まず、接続するモデルのタイプを決定し、対応するプロバイダーのディレクトリ内に対応するモデルタイプの`module`を作成する必要があります。
+
+現在サポートされているモデルタイプは以下の通りです:
+
+* `LLM` テキスト生成モデル
+* `text_embedding` テキスト埋め込みモデル
+* `rerank` ランク付けモデル
+* `speech2text` 音声からテキストへの変換モデル
+* `TTS` テキストから音声への変換モデル
+* `moderation` 審査
+
+ここでは`Anthropic`を例に挙げると、`Anthropic`はLLMのみをサポートしているため、`model_providers.anthropic`に`llm`という名前の`module`を作成します。
+
+事前に定義されたモデルについては、`llm` `module` の下に、モデル名をファイル名とするYAMLファイルを作成する必要があります。例えば`claude-2.1.yaml`です。
+
+#### モデルのYAMLファイルのサンプル
+
+```yaml
+model: claude-2.1 # モデル識別子
+# モデル表示名。en_US英語、zh_Hans中国語の二つの言語を設定できます。zh_Hansが設定されていない場合、デフォルトでen_USが使用されます。
+# ラベルを設定しない場合、モデル識別子が使用されます。
+label:
+  en_US: claude-2.1
+model_type: llm # モデルタイプ、claude-2.1はLLMです
+features: # サポートする機能、agent-thoughtはエージェント推論、visionは画像理解をサポート
+- agent-thought
+model_properties: # モデルプロパティ
+  mode: chat # LLMモード、completeはテキスト補完モデル、chatは対話モデル
+  context_size: 200000 # 最大コンテキストサイズ
+parameter_rules: # モデル呼び出しパラメータルール、LLMのみ提供が必要
+- name: temperature # 呼び出しパラメータ変数名
+  # デフォルトで5つの変数内容設定テンプレートが用意されています。temperature/top_p/max_tokens/presence_penalty/frequency_penalty
+  # use_template内でテンプレート変数名を設定すると、entities.defaults.PARAMETER_RULE_TEMPLATE内のデフォルト設定が使用されます
+  # 追加の設定パラメータを設定した場合、デフォルト設定を上書きします
+  use_template: temperature
+- name: top_p
+  use_template: top_p
+- name: top_k
+  label: # 呼び出しパラメータ表示名
+    zh_Hans: 取样数量
+    en_US: Top k
+  type: int # パラメータタイプ、float/int/string/booleanがサポートされています
+  help: # ヘルプ情報、パラメータの作用を説明
+    zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
+    en_US: Only sample from the top K options for each subsequent token.
+  required: false # 必須かどうか、設定しない場合もあります
+- name: max_tokens_to_sample
+  use_template: max_tokens
+  default: 4096 # パラメータデフォルト値
+  min: 1 # パラメータ最小値、float/intのみ使用可能
+  max: 4096 # パラメータ最大値、float/intのみ使用可能
+pricing: # 価格情報
+  input: '8.00' # 入力単価、つまりプロンプト単価
+  output: '24.00' # 出力単価、つまり返答内容単価
+  unit: '0.000001' # 価格単位、上記価格は1M(100万)トークンあたりの単価
+  currency: USD # 価格通貨
+```
+
+すべてのモデル構成が完了した後に、モデルコードの実装を開始することをお勧めします。
+
+同様に、`model_providers`ディレクトリ内の他のプロバイダーの対応するモデルタイプディレクトリにあるYAML構成情報を参照することもできます。全てのYAMLルールについては、「Schema[^1]」をご覧ください。
+
+#### モデル呼び出しコードの実装
+
+次に、`llm` `module`内に同名のPythonファイル`llm.py`を作成し、コード実装を行います。
+
+`llm.py`内にAnthropic LLMクラスを作成し、`AnthropicLargeLanguageModel`(任意の名前)という名前を付けます。このクラスは`__base.large_language_model.LargeLanguageModel`基底クラスを継承し、以下のメソッドを実装します:
+
+* LLM呼び出し
+
+  ストリーミング返答と同期返答の両方をサポートする、LLM呼び出しの中核メソッドを実装します。
+
+  ```python
+  def _invoke(self, model: str, credentials: dict,
+              prompt_messages: list[PromptMessage], model_parameters: dict,
+              tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None,
+              stream: bool = True, user: Optional[str] =
None) \ + -> Union[LLMResult, Generator]: + """ + Invoke large language model + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param model_parameters: model parameters + :param tools: tools for tool calling + :param stop: stop words + :param stream: is stream response + :param user: unique user id + :return: full response or stream response chunk generator result + """ + ``` + + 実装時には、同期返答とストリーミング返答を処理するために2つの関数を使用する必要があります。Pythonは`yield`キーワードを含む関数をジェネレータ関数として認識し、返されるデータタイプが固定されるため、同期返答とストリーミング返答を別々に実装する必要があります。以下のように(以下の例では簡略化されたパラメータを使用していますが、実際の実装では上記のパラメータリストに従う必要があります): + + ```python + def _invoke(self, stream: bool, **kwargs) \ + -> Union[LLMResult, Generator]: + if stream: + return self._handle_stream_response(**kwargs) + return self._handle_sync_response(**kwargs) + + def _handle_stream_response(self, **kwargs) -> Generator: + for chunk in response: + yield chunk + def _handle_sync_response(self, **kwargs) -> LLMResult: + return LLMResult(**response) + ``` +* 事前計算入力トークン + + モデルが事前計算トークンインターフェースを提供していない場合は、0を返しても構いません。 + + ```python + def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], + tools: Optional[list[PromptMessageTool]] = None) -> int: + """ + Get number of tokens for given prompt messages + + :param model: model name + :param credentials: model credentials + :param prompt_messages: prompt messages + :param tools: tools for tool calling + :return: + """ + ``` +* モデル認証情報検証 + + プロバイダーの認証情報検証と同様に、ここでは個別のモデルに対して検証を行います。 + + ```python + def validate_credentials(self, model: str, credentials: dict) -> None: + """ + Validate model credentials + + :param model: model name + :param credentials: model credentials + :return: + """ + ``` +* 呼び出し異常エラーのマッピングテーブル + + モデル呼び出し異常時に、Runtime時に指定の`InvokeError`タイプにマッピングする必要があります。これにより、Difyは異なるエラーに対して異なる後続処理を行うことができます。 + + ランタイムエラー(Runtime Errors): + + * `InvokeConnectionError` 呼び出し接続エラー + * 
`InvokeServerUnavailableError` 呼び出しサーバー利用不可エラー + * `InvokeRateLimitError` 呼び出しレート制限エラー + * `InvokeAuthorizationError` 認証エラー + * `InvokeBadRequestError` 呼び出し不正リクエストエラー + + ```python + @property + def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: + """ + Map model invoke error to unified error + The key is the error type thrown to the caller + The value is the error type thrown by the model, + which needs to be converted into a unified error type for the caller. + + :return: Invoke error mapping + """ + ``` + +インターフェースメソッドの説明については:[Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/docs/en_US/interfaces.md)をご覧ください。具体的な実装については:[llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py)を参照してください。 + +[^1]: #### プロバイダー + + * `provider` (string) プロバイダー識別子、例:`openai` + * `label` (object) プロバイダー表示名、i18n対応、`en_US`英語、`zh_Hans`中国語の二つの言語を設定可能 + * `zh_Hans` (string) \[optional] 中国語ラベル名、`zh_Hans`が設定されていない場合、デフォルトで`en_US`が使用されます。 + * `en_US` (string) 英語ラベル名 + * `description` (object) \[optional] プロバイダー説明、i18n対応 + * `zh_Hans` (string) \[optional] 中国語説明 + * `en_US` (string) 英語説明 + * `icon_small` (string) \[optional] プロバイダー小アイコン、対応するプロバイダー実装ディレクトリ内の`_assets`ディレクトリに保存、中英同様のポリシー + * `zh_Hans` (string) \[optional] 中国語アイコン + * `en_US` (string) 英語アイコン + * `icon_large` (string) \[optional] プロバイダー大アイコン、対応するプロバイダー実装ディレクトリ内の\_assetsディレクトリに保存、中英同様のポリシー + * `zh_Hans` (string) \[optional] 中国語アイコン + * `en_US` (string) 英語アイコン + * `background` (string) \[optional] 背景色の色値、例:#FFFFFF、空白の場合はデフォルトの色が表示されます。 + * `help` (object) \[optional] ヘルプ情報 + * `title` (object) ヘルプタイトル、i18n対応 + * `zh_Hans` (string) \[optional] 中国語タイトル + * `en_US` (string) 英語タイトル + * `url` (object) ヘルプリンク、i18n対応 + * `zh_Hans` (string) \[optional] 中国語リンク + * `en_US` (string) 英語リンク + * `supported_model_types` (array\[ModelType]) 対応モデルタイプ + * `configurate_methods` (array\[ConfigurateMethod]) 設定方法 + * `provider_credential_schema` 
(ProviderCredentialSchema) プロバイダー認証情報スキーマ + * `model_credential_schema` (ModelCredentialSchema) モデル認証情報スキーマ \ No newline at end of file diff --git a/ja-jp/guides/model-configuration/schema.mdx b/ja-jp/guides/model-configuration/schema.mdx new file mode 100644 index 00000000..7c7c3991 --- /dev/null +++ b/ja-jp/guides/model-configuration/schema.mdx @@ -0,0 +1,212 @@ +--- +title: 設定ルール +version: 'v1.0' +--- + +- 供給業者のルールは [Provider](#Provider) エンティティに基づいています。 + +- モデルのルールは [AIModelEntity](#AIModelEntity) エンティティに基づいています。 + +> 以下のすべてのエンティティは `Pydantic BaseModel` に基づいており、対応するエンティティは `entities` モジュールで見つけることができます。 + +### Provider + +- `provider` (string) 供給業者の識別子、例:`openai` +- `label` (object) 供給業者の表示名、i18n、`en_US` 英語、`zh_Hans` 中国語の2種類の言語を設定できます + - `zh_Hans` (string) [optional] 中国語のラベル名、`zh_Hans` が設定されていない場合はデフォルトで `en_US` が使用されます。 + - `en_US` (string) 英語のラベル名 +- `description` (object) [optional] 供給業者の説明、i18n + - `zh_Hans` (string) [optional] 中国語の説明 + - `en_US` (string) 英語の説明 +- `icon_small` (string) [optional] 供給業者の小さなアイコン、対応する供給業者の実装ディレクトリ内の `_assets` ディレクトリに保存されています。中国語と英語の方針は `label` と同じです。 + - `zh_Hans` (string) [optional] 中国語のアイコン + - `en_US` (string) 英語のアイコン +- `icon_large` (string) [optional] 供給業者の大きなアイコン、対応する供給業者の実装ディレクトリ内の `_assets` ディレクトリに保存されています。中国語と英語の方針は `label` と同じです。 + - `zh_Hans` (string) [optional] 中国語のアイコン + - `en_US` (string) 英語のアイコン +- `background` (string) [optional] 背景色の値、例:#FFFFFF、空白の場合はフロントエンドのデフォルトの色が表示されます。 +- `help` (object) [optional] ヘルプ情報 + - `title` (object) ヘルプのタイトル、i18n + - `zh_Hans` (string) [optional] 中国語のタイトル + - `en_US` (string) 英語のタイトル + - `url` (object) ヘルプリンク、i18n + - `zh_Hans` (string) [optional] 中国語のリンク + - `en_US` (string) 英語のリンク +- `supported_model_types` (array[[ModelType](#ModelType)]) サポートされるモデルタイプ +- `configurate_methods` (array[[ConfigurateMethod](#ConfigurateMethod)]) 設定方法 +- `provider_credential_schema` ([ProviderCredentialSchema](#ProviderCredentialSchema)) 供給業者の資格情報スキーマ +- `model_credential_schema` 
([ModelCredentialSchema](#ModelCredentialSchema)) モデルの資格情報スキーマ + +### AIModelEntity + +- `model` (string) モデルの識別子、例:`gpt-3.5-turbo` +- `label` (object) [optional] モデルの表示名、i18n、`en_US` 英語、`zh_Hans` 中国語の2種類の言語を設定できます + - `zh_Hans `(string) [optional] 中国語のラベル名 + - `en_US` (string) 英語のラベル名 +- `model_type` ([ModelType](#ModelType)) モデルのタイプ +- `features` (array[[ModelFeature](#ModelFeature)]) [optional] サポートされる機能のリスト +- `model_properties` (object) モデルのプロパティ + - `mode` ([LLMMode](#LLMMode)) モード (モデルタイプ `llm` で使用可能) + - `context_size` (int) コンテキストサイズ (モデルタイプ `llm` `text-embedding` で使用可能) + - `max_chunks` (int) 最大チャンク数 (モデルタイプ `text-embedding` `moderation` で使用可能) + - `file_upload_limit` (int) ファイルの最大アップロード制限、単位:MB。(モデルタイプ `speech2text` で使用可能) + - `supported_file_extensions` (string) サポートされるファイルの拡張形式、例:mp3,mp4(モデルタイプ `speech2text` で使用可能) + - `default_voice` (string) デフォルトの音声、必須:alloy,echo,fable,onyx,nova,shimmer(モデルタイプ `tts` で使用可能) + - `voices` (list) 選択可能な音声のリスト。 + - `mode` (string) 音声モデル。(モデルタイプ `tts` で使用可能) + - `name` (string) 音声モデルの表示名。(モデルタイプ `tts` で使用可能) + - `language` (string) 音声モデルのサポート言語。(モデルタイプ `tts` で使用可能) + - `word_limit` (int) 一度に変換できる単語数の制限、デフォルトでは段落ごとに分割されます(モデルタイプ `tts` で使用可能) + - `audio_type` (string) サポートされるオーディオファイルの拡張形式、例:mp3,wav(モデルタイプ `tts` で使用可能) + - `max_workers` (int) テキストオーディオ変換の並行タスク数をサポート(モデルタイプ `tts` で使用可能) + - `max_characters_per_chunk` (int) チャンクあたりの最大文字数(モデルタイプ `moderation` で使用可能) +- `parameter_rules` (array[[ParameterRule](#ParameterRule)]) [optional] モデル呼び出しパラメータのルール +- `pricing` ([PriceConfig](#PriceConfig)) [optional] 価格情報 +- `deprecated` (bool) 廃止されていますか。廃止されると、モデルリストは表示されなくなりますが、すでに設定されているモデルは引き続き使用できます。デフォルトは False です。 + +### ModelType + +- `llm` テキスト生成モデル +- `text-embedding` テキスト埋め込みモデル +- `rerank` Rerank モデル +- `speech2text` 音声からテキストへ +- `tts` テキストから音声へ +- `moderation` モデレーション + +### ConfigurateMethod + +- `predefined-model` 事前定義モデル + + ユーザーは、供給業者ごとに統一された資格情報を設定するだけで、供給業者の事前定義モデルを使用できます。 + +- `customizable-model` カスタマイズ可能なモデル + + 
ユーザーは、各モデルの資格情報を設定することができます。 + +- `fetch-from-remote` リモートから取得 + + `predefined-model` の設定方法と同様に、統一されたベンダーの認証情報を設定すれば、モデルは認証情報を通じてベンダーから取得されます。 + +### ModelFeature + +- `agent-thought` エージェントの思考、一般的に70Bを超えると推論能力があります。 +- `vision` 視覚、例えば:画像理解。 +- `tool-call` ツールの呼び出し +- `multi-tool-call` 複数ツールの呼び出し +- `stream-tool-call` ストリーミングツール呼び出し + +### FetchFrom + +- `predefined-model` 予め定義されたモデル +- `fetch-from-remote` リモートモデル + +### LLMMode + +- `completion` テキスト補完 +- `chat` チャット + +### ParameterRule + +- `name` (string) モデル呼び出しの実際のパラメータ名 + +- `use_template` (string) [optional] テンプレートを使用 + + デフォルトで5種類の変数内容設定テンプレートが用意されています: + + - `temperature` + - `top_p` + - `frequency_penalty` + - `presence_penalty` + - `max_tokens` + + `use_template` にテンプレート変数名を直接設定することで、`entities.defaults.PARAMETER_RULE_TEMPLATE` に基づくデフォルト設定が使用されます。 + `name` と `use_template` 以外のすべてのパラメータを設定する必要はありません。追加の設定パラメータを設定した場合、デフォルト設定が上書きされます。 + `openai/llm/gpt-3.5-turbo.yaml`を参照してください。 + +- `label` (object) [optional] ラベル,i18n + + - `zh_Hans`(string) [optional] 中国語ラベル名 + - `en_US` (string) 英語ラベル名 + +- `type`(string) [optional] パラメータタイプ + + - `int` 整数 + - `float` 浮動小数点 + - `string` 文字列 + - `boolean` ブール型 + +- `help` (string) [optional] ヘルプ情報 + + - `zh_Hans` (string) [optional] 中国語ヘルプ情報 + - `en_US` (string) 英語ヘルプ情報 + +- `required` (bool) 必須かどうか、デフォルトは False。 + +- `default`(int/float/string/bool) [optional] デフォルト値 + +- `min`(int/float) [optional] 最小値、数値型のみ適用 + +- `max`(int/float) [optional] 最大値、数値型のみ適用 + +- `precision`(int) [optional] 精度、小数点以下の桁数を保持、数値型のみ適用 + +- `options` (array[string]) [optional] ドロップダウン選択肢、`type` が `string` の場合にのみ適用、設定しないか null の場合は選択肢に制限はありません。 + +### PriceConfig + +- `input` (float) 入力単価、すなわちプロンプト単価 +- `output` (float) 出力単価、すなわち返却内容単価 +- `unit` (float) 価格単位、例えば1Mトークン単位で計算する場合、単価に対応する単位トークン数は `0.000001` +- `currency` (string) 通貨単位 + +### ProviderCredentialSchema + +- `credential_form_schemas` (array[[CredentialFormSchema](#CredentialFormSchema)]) 認証情報フォーム規範 + +### ModelCredentialSchema + +- 
`model` (object) モデル識別子、変数名はデフォルトで `model` + - `label` (object) モデルフォーム項目の表示名 + - `en_US` (string) 英語 + - `zh_Hans` (string) [optional] 中国語 + - `placeholder` (object) モデルのヒント内容 + - `en_US` (string) 英語 + - `zh_Hans` (string) [optional] 中国語 +- `credential_form_schemas` (array[[CredentialFormSchema](#CredentialFormSchema)]) 認証情報フォーム規範 + +### CredentialFormSchema + +- `variable` (string) フォーム項目の変数名 +- `label` (object) フォーム項目のラベル + - `en_US` (string) 英語のラベル + - `zh_Hans` (string) [optional] 中国語のラベル +- `type` ([FormType](#FormType)) フォーム項目の種類 +- `required` (bool) この項目が必須かどうか +- `default` (string) デフォルト値 +- `options` (array[[FormOption](#FormOption)]) フォーム項目が `select` または `radio` の場合に使用するドロップダウンの選択肢を定義 +- `placeholder` (object) フォーム項目が `text-input` の場合にのみ使用するプロパティ、入力フィールドに表示されるヒント + - `en_US` (string) 英語のプレースホルダー + - `zh_Hans` (string) [optional] 中国語のプレースホルダー +- `max_length` (int) フォーム項目が `text-input` の場合に使用するプロパティ、入力可能な最大文字数を定義。0 は制限なしを意味する。 +- `show_on` (array[[FormShowOnObject](#FormShowOnObject)]) 他のフォーム項目の値が条件に一致する場合に表示される。空の場合は常に表示される。 + +### FormType + +- `text-input` テキスト入力コンポーネント +- `secret-input` パスワード入力コンポーネント +- `select` 単一選択ドロップダウン +- `radio` ラジオボタンコンポーネント +- `switch` スイッチコンポーネント、`true` と `false` のみをサポート + +### FormOption + +- `label` (object) ラベル + - `en_US` (string) 英語のラベル + - `zh_Hans` (string) [optional] 中国語のラベル +- `value` (string) ドロップダウンの選択肢の値 +- `show_on` (array[[FormShowOnObject](#FormShowOnObject)]) 他のフォーム項目の値が条件に一致する場合に表示される。空の場合は常に表示される。 + +### FormShowOnObject + +- `variable` (string) 他のフォーム項目の変数名 +- `value` (string) 他のフォーム項目の変数値 \ No newline at end of file diff --git a/ja-jp/guides/monitoring/README.mdx b/ja-jp/guides/monitoring/README.mdx new file mode 100644 index 00000000..8fea0df1 --- /dev/null +++ b/ja-jp/guides/monitoring/README.mdx @@ -0,0 +1,8 @@ +--- +title: モニタリング +--- + + +**概要** で本番環境におけるアプリケーションのパフォーマンスをモニタリングし、データ分析ダッシュボードで本番環境におけるアプリケーションの使用コスト、レイテンシ、ユーザーフィードバック、パフォーマンスなどの指標を分析します。継続デバッグおよびイテレーションを通じてアプリケーションを絶えず改善します。 + 
+![概要](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/45c8580f167e27c9e26784b5426987d0.png) \ No newline at end of file diff --git a/ja-jp/guides/monitoring/analysis.mdx b/ja-jp/guides/monitoring/analysis.mdx new file mode 100644 index 00000000..5df5b363 --- /dev/null +++ b/ja-jp/guides/monitoring/analysis.mdx @@ -0,0 +1,37 @@ +--- +title: データ分析 +--- + +**監視 — 分析** では、使用量、アクティブユーザー数、大規模言語モデル (LLM) のコール消費などを表示します。これにより、アプリケーションの運営効果、活性度、経済性を継続的に改善できます。さらに多くの有用な可視化能力を段階的に提供していきますので、ぜひご要望をお知らせください。 + +![監視 — 分析](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/273fbe372440ad8da870e6524854fa97.png) + +*** + +**合計メッセージ数(Total Messages)** + +AIとの毎日会話した総回数を反映します。ユーザーの質問に AI が回答するごとに1回と数える。プロンプトエンジニアリング(prompt engineering)とデバッグの会話は含まれません。 + +**活躍ユーザー数(Active Users)** + +AI と有効にインタラクションしたユニークユーザー数を表します。少なくとも一問一答以上のやり取りをしたユーザーが含まれます。プロンプトエンジニアリング(prompt engineering)とデバッグの会話は含まれません。 + +**平均会話インタラクション数(Average Session Interactions)** + +各会話ユーザーの継続的なコミュニケーション回数を反映します。ユーザーが AI と 10 ラウンドの質問と回答を行った場合、その数値は 10 になります。この指標はユーザーの粘着性を反映します。対話型アプリケーションでのみ提供されます。 + +**トークン出力速度(Token Output Speed)** + +毎秒のトークン出力数を示し、モデルの生成速度およびアプリケーションの使用頻度を間接的に反映します。 + +**ユーザー満足度(User Satisfaction Rate)** + +1000 メッセージごとの「いいね」数を示します。ユーザーが回答に非常に満足している割合を反映します。 + +**トークン消費数(Token Usage)** + +そのアプリケーションが毎日言語モデルにリクエストしたトークンの消費量を反映し、コスト管理に役立ちます。 + +**合計会話数 (Total Conversation)** + +毎日のAI会話数。数え方は:会話を1回と数える、毎回の会話には複数のメッセージ交換できます。プロンプトエンジニアリング(prompt engineering)とデバッグの会話は含まれません。 diff --git a/ja-jp/guides/monitoring/integrate-external-ops-tools/README.mdx b/ja-jp/guides/monitoring/integrate-external-ops-tools/README.mdx new file mode 100644 index 00000000..8f3a29d8 --- /dev/null +++ b/ja-jp/guides/monitoring/integrate-external-ops-tools/README.mdx @@ -0,0 +1,6 @@ +--- +title: 外部のOpsツールを統合する +--- + + +🚧 メンテナンス中です \ No newline at end of file diff --git a/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx 
b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx
new file mode 100644
index 00000000..d5d954bb
--- /dev/null
+++ b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.mdx
@@ -0,0 +1,624 @@
+---
+title: LangFuseとの統合
+---
+
+
+### Langfuseとは
+
+Langfuseは、LLMアプリケーションの開発者がデバッグ、分析、イテレーションなどを通じてアプリケーションのパフォーマンスを向上させるためのツールです。
+
+
+Langfuseの公式サイト:[https://langfuse.com/](https://langfuse.com/)
+
+
+***
+
+### Langfuseの使い方
+
+1. Langfuseの[公式サイト](https://langfuse.com/)で登録し、ログインします。
+2. Langfuseでプロジェクトを作成します。
+ログイン後、ホームページの **New** をクリックし、新たな**プロジェクト**を作成します。このプロジェクトは、Dify内の**アプリ**と連動したデータモニタリングに使用されます。
+
+ + +
+ +*** + +### 監視データのリスト + +#### ワークフローとチャットフローの情報を追跡 + +**ワークフローとチャットフローの追跡** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Workflow | LangFuse Trace |
| --- | --- |
| workflow\_app\_log\_id/workflow\_run\_id | id |
| user\_session\_id | user\_id |
| workflow\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| Model token consumption | usage |
| metadata | metadata |
| error | level |
| error | status\_message |
| \[workflow] | tags |
| \["message", conversation\_mode] | session\_id |
| conversion\_id | parent\_observation\_id |
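上の対応表が示す変換は、コードにするとおおよそ次のようなスケッチになります(関数名と Dify 側レコードのキー名は説明用の仮定であり、Dify の内部実装や Langfuse への実際の送信処理そのものではありません)。

```python
# 説明用のスケッチ:Dify のワークフロー実行レコードを、上表の対応に従って
# Langfuse の Trace 形式の辞書へ写像する例。キー名は仮定です。
def to_langfuse_trace(run: dict) -> dict:
    error = run.get("error")
    return {
        "id": run["workflow_run_id"],
        "user_id": run["user_session_id"],
        "name": f"workflow_{run['workflow_id']}",
        "start_time": run["start_time"],
        "end_time": run["end_time"],
        "input": run["inputs"],
        "output": run["outputs"],
        "metadata": run.get("metadata", {}),
        # error がある場合のみ level / status_message に反映される(上表参照)
        "level": "ERROR" if error else "DEFAULT",
        "status_message": error,
        "tags": ["workflow"],
    }

trace = to_langfuse_trace({
    "workflow_run_id": "run-001",
    "workflow_id": "abc",
    "user_session_id": "user-42",
    "start_time": "2025-03-20T16:38:40Z",
    "end_time": "2025-03-20T16:38:41Z",
    "inputs": {"query": "こんにちは"},
    "outputs": {"answer": "…"},
    "error": None,
})
```

エラーが発生した実行では `level` が `ERROR` になり、`status_message` にエラー内容が入る、という上表の 2 行を 1 つの分岐で表しています。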
+**Workflow Trace Info** + +- workflow\_id - ワークフローのユニークID +- conversation\_id - 会話ID +- workflow\_run\_id - このランタイムのワークフローID +- tenant\_id - テナントID +- elapsed\_time - このランタイムの経過時間 +- status - ランタイムのステータス +- version - ワークフローのバージョン +- total\_tokens - このランタイムで使用されたトークンの合計 +- file\_list - 処理されたファイルのリスト +- triggered\_from - このランタイムをトリガしたソース +- workflow\_run\_inputs - このワークフローの入力 +- workflow\_run\_outputs - このワークフローの出力 +- error - エラーメッセージ +- query - ランタイムで使用されるクエリ +- workflow\_app\_log\_id - ワークフローアプリケーションログID +- message\_id - 関連するメッセージID +- start\_time - このランタイムの開始時刻 +- end\_time - このランタイムの終了時刻 +- workflow node executions - ワークフローノードのランタイム情報 +- Metadata + - workflow\_id - ワークフローのユニークID + - conversation\_id - 会話ID + - workflow\_run\_id - このランタイムのワークフローID + - tenant\_id - テナントID + - elapsed\_time - このランタイムの経過時間 + - status - 運用状態 + - version - ワークフローのバージョン + - total\_tokens - このランタイムで使用されたトークンの合計 + - file\_list - 処理されたファイルのリスト + - triggered\_from - このランタイムをトリガしたソース + +#### Message Trace 情報 + +**LLM会話を追跡するため** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Message | LangFuse Generation/Trace |
| --- | --- |
| message\_id | id |
| user\_session\_id | user\_id |
| message\_{id} | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| Model token consumption | usage |
| metadata | metadata |
| error | level |
| error | status\_message |
| \["message", conversation\_mode] | tags |
| conversation\_id | session\_id |
| conversion\_id | parent\_observation\_id |
+**Message Trace Info** + +- message\_id - メッセージID +- message\_data - メッセージデータ +- user\_session\_id - ユーザーのセッションID +- conversation\_model - 会話モデル +- message\_tokens - メッセージトークン +- answer\_tokens - 回答トークン +- total\_tokens - メッセージと回答のトータルトークン +- error - エラーメッセージ +- inputs - 入力データ +- outputs - 出力データ +- file\_list - 処理されたファイルのリスト +- start\_time - 開始時刻 +- end\_time - 終了時刻 +- message\_file\_data - 関連ファイルデータのメッセージ +- conversation\_mode - 会話モード +- Metadata + - conversation\_id - 会話ID + - ls\_provider - モデルプロバイダ + - ls\_model\_name - モデルID + - status - メッセージステータス + - from\_end\_user\_id - 送信ユーザーID + - from\_account\_id - 送信アカウントID + - agent\_based - エージェントベースか + - workflow\_run\_id - このランタイムのワークフローID + - from\_source - メッセージのソース + - message\_id - メッセージID + +#### Moderation Trace 情報 + +**会話モデレーションを追跡するため** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Moderation | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| moderation | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[moderation] | tags |
| message\_id | parent\_observation\_id |
+**Moderation Trace Info**
+
+- message\_id - メッセージID
+- user\_id - ユーザーID
+- workflow\_app\_log\_id - ワークフローアプリケーションログID
+- inputs - レビュー用の入力データ
+- message\_data - メッセージデータ
+- flagged - 注目対象としてフラグが立てられているか
+- action - 実行すべき具体的なアクション
+- preset\_response - 事前設定の応答
+- start\_time - レビューの開始時刻
+- end\_time - レビューの終了時刻
+- Metadata
+  - message\_id - メッセージID
+  - action - 実行すべき具体的なアクション
+  - preset\_response - 事前設定の応答
+
+#### 提案された質問トレース情報
+
+**提案された質問を追跡するため**
+
| Suggested Question | LangFuse Generation/Trace |
| --- | --- |
| user\_id | user\_id |
| suggested\_question | name |
| start\_time | start\_time |
| end\_time | end\_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[suggested\_question] | tags |
| message\_id | parent\_observation\_id |
+**Suggested Question Trace Info**
+
+* message\_id - メッセージID
+* message\_data - メッセージデータ
+* inputs - 入力データ
+* outputs - 出力データ
+* start\_time - 開始時刻
+* end\_time - 終了時刻
+* total\_tokens - トータルトークン
+* status - メッセージの状態
+* error - エラーメッセージ
+* from_account_id - 送信元アカウントのID
+* agent_based - エージェントベースであるかどうか
+* from_source - メッセージの発信元
+* model_provider - モデルの提供者
+* model_id - モデルのID
+* suggested_question - 提案された質問
+* level - ステータスのレベル
+* status_message - ステータスメッセージ
+* Metadata
+  * message_id - メッセージのID
+  * ls_provider - モデルの提供者
+  * ls_model_name - モデルの名前
+  * status - メッセージの状態
+  * from_end_user_id - 送信元ユーザーのID
+  * from_account_id - 送信元アカウントのID
+  * workflow_run_id - このランタイムにおけるワークフローのID
+  * from_source - メッセージの発信元
+
+#### Dataset Retrieval Trace 情報
+
+**ナレッジベースの取得を追跡するために使用**
+
| データセットの取得 | LangFuse生成/トレース |
| --- | --- |
| user_id | user_id |
| dataset_retrieval | name |
| start_time | start_time |
| end_time | end_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[dataset_retrieval] | tags |
| message_id | parent_observation_id |
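上の対応表のポイントは、検索処理が独立したトレースではなく、`message_id` を `parent_observation_id` に入れてメッセージトレースの子スパンとしてぶら下がることです。形にすると、たとえば次のようなスケッチになります(関数名・キー名は説明用の仮定です)。

```python
# 説明用のスケッチ:ナレッジベース検索を、message_id を親とする
# Langfuse のスパン(observation)として表現する例。キー名は上表に基づく仮定。
def retrieval_span(message_id: str, query: str, documents: list) -> dict:
    return {
        "name": "dataset_retrieval",
        "parent_observation_id": message_id,  # 親のメッセージトレースに紐付ける
        "input": {"query": query},
        "output": {"documents": documents},
        "tags": ["dataset_retrieval"],
    }

span = retrieval_span("msg-7", "返品ポリシーは?", [{"id": "doc-1", "score": 0.82}])
```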
+**Dataset Retrieval Trace Info**
+
+* message_id - メッセージのID
+* inputs - 入力メッセージ
+* documents - ドキュメントデータ
+* start_time - 開始時間
+* end_time - 終了時間
+* message_data - メッセージデータ
+* Metadata
+  * message_id - メッセージのID
+  * ls_provider - モデルの提供者
+  * ls_model_name - モデルの名前
+  * status - メッセージの状態
+  * from_end_user_id - 送信元ユーザーのID
+  * from_account_id - 送信元アカウントのID
+  * agent_based - エージェントベースであるかどうか
+  * workflow_run_id - このランタイムにおけるワークフローのID
+  * from_source - メッセージの発信元
+
+#### Tool Trace 情報
+
+**ツールの呼び出しを追跡するために使用**
+
| ツール | LangFuse生成/トレース |
| --- | --- |
| user_id | user_id |
| tool_name | name |
| start_time | start_time |
| end_time | end_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \["tool", tool_name] | tags |
| message_id | parent_observation_id |
+**Tool Trace Info** + +* message_id - メッセージのID +* tool_name - ツール名 +* start_time - 開始時間 +* end_time - 終了時間 +* tool_inputs - ツールの入力 +* tool_outputs - ツールの出力 +* error - エラーメッセージ(存在する場合) +* inputs - メッセージの入力 +* outputs - メッセージの出力 +* tool_config - ツールの構成 +* tool_parameters - ツールのパラメータ +* file_url - 関連ファイルのURL +* Metadata + * message_id - メッセージのID + * tool_name - ツール名 + * tool_inputs - ツールの入力 + * tool_outputs - ツールの出力 + * tool_config - ツールの構成 + * error - エラーメッセージ + * tool_parameters - ツールのパラメータ + * message_file_id - メッセージファイルのID + * created_by_role - 作成者の役割 + * created_user_id - 作成ユーザーのID + +#### Generate Name Trace 情報 + +**会話タイトル生成の追跡に使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Generate Name | LangFuse Generation/Trace |
| --- | --- |
| user_id | user_id |
| generate_name | name |
| start_time | start_time |
| end_time | end_time |
| inputs | input |
| outputs | output |
| metadata | metadata |
| \[generate_name] | tags |
+**Generate Name Trace Info** + +* conversation_id - 会話のID +* inputs - 入力データ +* outputs - 生成されたセッション名 +* start_time - 開始時間 +* end_time - 終了時間 +* tenant\_id - テナントID +* Metadata + * conversation_id - 会話のID + * tenant\_id - テナントID diff --git a/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx new file mode 100644 index 00000000..a2909009 --- /dev/null +++ b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.mdx @@ -0,0 +1,344 @@ +--- +title: LangSmithの統合 +--- + + +### 1 LangSmithとは + +LangSmithはLLMアプリケーションの開発、コラボレーション、テスト、デプロイ、監視などのツールを提供するプラットフォームです。 + + +LangSmithの公式サイト:[https://www.langchain.com/langsmith](https://www.langchain.com/langsmith) + + +*** + +### 2 LangSmithの使い方 + +#### 1. LangSmithの[公式サイト](https://www.langchain.com/langsmith)から登録し、ログインする。 + +#### 2. LangSmithからプロジェクトを作成します + +ログイン後、ホームページの **New Project** をクリックし、新たな**プロジェクト**を作成します。このプロジェクトは、Dify内の**アプリ**と連動したデータモニタリングに使用されます。 + +![新たなプロジェクトを作成します。](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/58e20105fcc0771ca2431e8e5dcc42d3.png) + +作成する後、プロジェクトの中にチェクできます。 + +![LangSmithの中にプロジェクトをチェクします。](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/642c0ff7edfdfe77fba43aa22cc3fa71.png) + +#### 3. 
プロジェクト認証情報の作成
+
+左のサイドバーでプロジェクトの **設定** を開きます。
+
+![プロジェクトの設定](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/c49a1fc769215193928ff0d880422f89.png)
+
+**Create API Key** をクリックし、新たな認証情報を作成します。
+
+![プロジェクトのAPI Keyを作成します](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/7082286b0d12af4bc0c84d9a3acf8b1b.png)
+
+**Personal Access Token** を選択します。のちほど API 認証の際に使用します。
+
+![Personal Access Tokenを選択します](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/75a69bd4dd02f0ffc0313589ae12fb36.png)
+
+新たな API Key をコピーし、保存します。
+
+![新たなAPI keyをコピーします](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/723e96a13e8f722d6df714b11ffd0bb1.png)
+
+#### 4. Dify アプリで LangSmith を設定します
+
+監視対象のアプリのサイドメニューで**監視**ボタンをクリックし、**設定**をクリックします。
+
+![LangSmithを設定します](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/b6c7e5d4c2ca2092d59465cca27bc69c.png)
+
+次に、LangSmith で作成した **API Key** と**プロジェクト名**を**設定**に貼り付け、保存します。
+
+![LangSmithを設定します。](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/93dfabcadb7b2ff597f54beb5e642124.png)
+
+
+設定するプロジェクト名は、LangSmith 上のプロジェクト名と必ず一致させてください。一致しない場合、データの同期時に LangSmith が自動的に新しいプロジェクトを作成します。
+
+
+保存に成功すると、現在のページで監視状態を確認できます。
+
+![監視状態を見る](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/43369dc4de8f606c166fae2efab97d73.png)
+
+### LangSmithでのモニタリングデータの表示
+
+設定が完了すると、Dify 内のアプリケーションのデバッグデータや本番データを LangSmith でモニタリングできます。
+
+![Difyにおけるアプリケーションのデバッグ](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/a1370fdbb79257cba31a565ac6764802.png)
+
+LangSmithに切り替えると、ダッシュボード上でDifyアプリケーションの詳細な操作ログを見ることができます。
+
+![LangSmithでのアプリケーションデータの表示](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/2833b2ffa20927b5328e9624b065beea.png) + +LangSmithを通じて得られる詳細な大規模言語モデル(LLM)の操作ログは、Difyアプリケーションのパフォーマンスを最適化するために役立ちます。 + +![LangSmithでのアプリケーションデータの表示](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/monitoring/integrate-external-ops-tools/beeb4ee50c80de8db7400c1f65727c8c.png) + +### モニタリングデータリスト + +#### ワークフロー/チャットフローのトレース情報 + +ワークフローやチャットフローを追跡するために使用されます。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| ワークフロー | LangSmith Chain |
| --- | --- |
| workflow\_app\_log\_id/workflow\_run\_id | ID |
| user\_session\_id | メタデータに配置 |
| workflow\_{id} | 名前 |
| start\_time | 開始時間 |
| end\_time | 終了時間 |
| inputs | 入力 |
| outputs | 出力 |
| モデルトークン消費 | 使用メタデータ |
| metadata | 追加情報 |
| エラー | エラー |
| \[workflow] | タグ |
| "conversation\_id/none for workflow" | メタデータ内のconversation\_id |
| conversion\_id | 親実行ID |
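上の対応表を Langfuse 版と見比べると、LangSmith では `user_session_id` や `conversation_id` が専用フィールドではなくメタデータに入る点が異なります。これをコードで表すと、たとえば次のようなスケッチになります(関数名・キー名は説明用の仮定であり、Dify の内部実装そのものではありません)。

```python
# 説明用のスケッチ:同じワークフロー実行レコードを LangSmith の Chain ラン
# として整形する例。user_session_id / conversation_id はメタデータ側に置く。
def to_langsmith_run(run: dict) -> dict:
    return {
        "id": run["workflow_run_id"],
        "name": f"workflow_{run['workflow_id']}",
        "run_type": "chain",  # 上表の「LangSmith Chain」に対応(仮定)
        "inputs": run["inputs"],
        "outputs": run["outputs"],
        "error": run.get("error"),
        "tags": ["workflow"],
        "extra": {
            # 専用フィールドがないものはメタデータとして付与する(上表参照)
            "metadata": {
                "user_session_id": run["user_session_id"],
                "conversation_id": run.get("conversation_id"),
            }
        },
    }

ls_run = to_langsmith_run({
    "workflow_run_id": "run-001",
    "workflow_id": "abc",
    "user_session_id": "user-42",
    "inputs": {},
    "outputs": {},
})
```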
+**ワークフロートレース情報** + +- workflow\_id:ワークフローの固有識別子 +- conversation\_id:会話ID +- workflow\_run\_id:現在の実行ID +- tenant\_id:テナントID +- elapsed\_time:現在の実行にかかった時間 +- status:実行ステータス +- version:ワークフローのバージョン +- total\_tokens:現在の実行で使用されるトークンの合計数 +- file\_list:処理されたファイルのリスト +- triggered\_from:現在の実行を引き起こしたソース +- workflow\_run\_inputs:現在の実行の入力データ +- workflow\_run\_outputs:現在の実行の出力データ +- error:現在の実行中に発生したエラー +- query:実行中に使用されたクエリ +- workflow\_app\_log\_id:ワークフローアプリケーションログID +- message\_id:関連メッセージID +- start\_time:実行の開始時間 +- end\_time:実行の終了時間 +- workflow node executions:ワークフローノード実行に関する情報 +- メタデータ + - workflow\_id:ワークフローの固有識別子 + - conversation\_id:会話ID + - workflow\_run\_id:現在の実行ID + - tenant\_id:テナントID + - elapsed\_time:現在の実行にかかった時間 + - status:実行ステータス + - version:ワークフローのバージョン + - total\_tokens:現在の実行で使用されるトークンの合計数 + - file\_list:処理されたファイルのリスト + - triggered\_from:現在の実行を引き起こしたソース + +#### メッセージトレース情報 + +大規模言語モデル(LLM)関連の会話を追跡するために使用されます。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| チャット | LangSmith LLM |
| --- | --- |
| message\_id | ID |
| user\_session\_id | メタデータに配置 |
| "message\_{id}" | 名前 |
| start\_time | 開始時間 |
| end\_time | 終了時間 |
| inputs | 入力 |
| outputs | 出力 |
| モデルトークン消費 | 使用メタデータ |
| metadata | 追加情報 |
| エラー | エラー |
| \["message", conversation\_mode] | タグ |
| conversation\_id | メタデータ内のconversation\_id |
| conversion\_id | 親実行ID |
+**メッセージトレース情報** + +- message\_id:メッセージID +- message\_data:メッセージデータ +- user\_session\_id:ユーザーセッションID +- conversation\_model:会話モード +- message\_tokens:メッセージ中のトークン数 +- answer\_tokens:回答のトークン数 +- total\_tokens:メッセージと回答の合計トークン数 +- error:エラー情報 +- inputs:入力データ +- outputs:出力データ +- file\_list:処理されたファイルのリスト +- start\_time:開始時間 +- end\_time:終了時間 +- message\_file\_data:メッセージに関連付けられたファイルデータ +- conversation\_mode:会話モード +- メタデータ + - conversation\_id:会話ID + - ls\_provider:モデルプロバイダ + - ls\_model\_name:モデルID + - status:メッセージステータス + - from\_end\_user\_id:送信ユーザーのID + - from\_account\_id:送信アカウントのID + - agent\_based:メッセージがエージェントベースかどうか + - workflow\_run\_id:ワークフロー実行ID + - from\_source:メッセージのソース + +#### モデレーショントレース情報 + +会話のモデレーションを追跡するために使用されます。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| モデレーション | LangSmith Tool |
| --- | --- |
| user\_id | メタデータに配置 |
| "moderation" | 名前 |
| start\_time | 開始時間 |
| end\_time | 終了時間 |
| inputs | 入力 |
| outputs | 出力 |
| metadata | 追加情報 |
| \[moderation] | タグ |
| message\_id | 親実行ID |
+**モデレーショントレース情報** + +- message\_id:メッセージID +- user\_id:ユーザーID +- workflow\_app\_log\_id:ワークフローアプリケーションログID +- inputs:モデレーションの入力データ +- message\_data:メッセージデータ +- flagged:コンテンツに注意が必要かどうか +- action:実行された具体的なアクション +- preset\_response:プリセット応答 +- start\_time:モデレーション開始時間 +- end\_time:モデレーション終了時間 +- メタデータ + - message\_id:メッセージID + - action:実行された具体的なアクション + - preset\_response:プリセット応答 + +#### 提案された質問トレース情報 + +提案された質問を追跡するために使用されます。 \ No newline at end of file diff --git a/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx new file mode 100644 index 00000000..e6e39101 --- /dev/null +++ b/ja-jp/guides/monitoring/integrate-external-ops-tools/integrate-opik.mdx @@ -0,0 +1,523 @@ +--- +title: Opikの統合 +--- + + +## Opikの概要 + +Opikは、大規模言語モデル(LLM)アプリケーションを評価、テスト、および監視するためのオープンソースプラットフォームです。LLMベースのアプリケーション開発において、直感的な評価・テスト・監視機能を提供し、開発効率の向上を支援します。 + + +詳細については、[Opik](https://www.comet.com/site/products/opik/)をご参照ください。 + + +--- + +## Opikの導入ガイド + +### 1. [Opik](https://www.comet.com/signup?from=llm) に登録/ログイン + +### 2. Opik APIキーの取得 + +右上のユーザーメニューから**API Key**を選択し、APIキーを取得・コピーしてください。 + +![Opik APIキー](https://assets-docs.dify.ai/2025/01/a66603f01e4ffaa593a8b78fcf3f8204.png) + +### 3. 
OpikとDifyを統合 + +DifyアプリケーションでOpikを設定します。監視するアプリケーションを開き、サイドメニューで**監視**を選択し、ページ上の**アプリケーションパフォーマンスを追跡**をクリックします。 + +![アプリケーションパフォーマンスを追跡](https://assets-docs.dify.ai/2025/01/9d52a244e3b6cef1874ee838cd976111.png) + +設定後、Opikで作成した**API Key**と**プロジェクト名**を設定ページに貼り付けて保存します。 + +![Opikの設定](https://assets-docs.dify.ai/2025/01/7f4c436e2dc9fe94a3ed49219bb3360c.png) + +保存に成功すると、現在のページで監視ステータスを確認できます。 + +## 監視データの確認 + +設定が完了すると、Difyアプリケーションを通常通りデバッグまたは使用できます。すべての使用履歴はOpikで監視可能です。 + +![Opikでアプリデータを確認](https://assets-docs.dify.ai/2025/01/a1c5aa80325e6d0223d48a178393baec.png) + +Opikに切り替えると、ダッシュボードでDifyアプリケーションの詳細な操作ログを確認できます。 + +![Opikでアプリデータを確認](https://assets-docs.dify.ai/2025/01/09601d45eaf8ed90a4dfb07c34de36ff.png) + +Opikの詳細なLLM操作ログにより、Difyアプリケーションのパフォーマンスを最適化できます。 + +![Opikでアプリデータを確認](https://assets-docs.dify.ai/2025/01/708533b4fc616f852b5601fe602e3ef5.png) + +## モニタリングデータリスト + +### **ワークフロー/会話フロートラッキング情報** + +**ワークフローと会話フローの追跡に使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| ワークフロー | Opikトラッキング |
| --- | --- |
| workflow_app_log_id/workflow_run_id | id |
| user_session_id | メタデータに配置 |
| workflow\_{id} | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| モデルトークン消費 | usage_metadata |
| metadata | metadata |
| error | error |
| \[workflow] | tags |
| "conversation_id/none for workflow" | conversation_id in metadata |
+**ワークフロートラッキング情報** + +- workflow_id - ワークフローの一意識別子 +- conversation_id - 会話ID +- workflow_run_id - 現在の実行ID +- tenant_id - テナントID +- elapsed_time - 現在の実行にかかった時間 +- status - 実行ステータス +- version - ワークフローバージョン +- total_tokens - 現在の実行で使用されたトークン総数 +- file_list - 処理されたファイルのリスト +- triggered_from - 実行をトリガーしたソース +- workflow_run_inputs - 現在の実行の入力データ +- workflow_run_outputs - 現在の実行の出力データ +- error - 実行中に発生したエラー +- query - 実行中に使用されたクエリ +- workflow_app_log_id - ワークフローアプリケーションログID +- message_id - 関連するメッセージID +- start_time - 実行開始時間 +- end_time - 実行終了時間 +- workflow node executions - ワークフローノードの実行情報 +- メタデータ + - workflow_id - ワークフローの一意識別子 + - conversation_id - 会話ID + - workflow_run_id - 現在の実行ID + - tenant_id - テナントID + - elapsed_time - 現在の実行にかかった時間 + - status - 実行ステータス + - version - ワークフローバージョン + - total_tokens - 現在の実行で使用されたトークン総数 + - file_list - 処理されたファイルのリスト + - triggered_from - 実行をトリガーしたソース + +--- + +### **メッセージトラッキング情報** + +**LLM関連の会話を追跡するために使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| チャット | Opik LLM |
| --- | --- |
| message_id | id |
| user_session_id | メタデータに配置 |
| "llm" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| モデルトークン消費 | usage_metadata |
| metadata | metadata |
| \["message", conversation_mode] | tags |
| conversation_id | conversation_id in metadata |
+**メッセージトラッキング情報** + +- message_id - メッセージID +- message_data - メッセージデータ +- user_session_id - ユーザーセッションID +- conversation_model - 会話モード +- message_tokens - メッセージ内のトークン数 +- answer_tokens - 回答内のトークン数 +- total_tokens - メッセージと回答のトークン総数 +- error - エラー情報 +- inputs - 入力データ +- outputs - 出力データ +- file_list - 処理されたファイルリスト +- start_time - 開始時間 +- end_time - 終了時間 +- message_file_data - メッセージ関連のファイルデータ +- conversation_mode - 会話モード +- メタデータ + - conversation_id - 会話ID + - ls_provider - モデルプロバイダー + - ls_model_name - モデルID + - status - メッセージステータス + - from_end_user_id - 送信ユーザーID + - from_account_id - 送信アカウントID + - agent_based - エージェントベースかどうか + - workflow_run_id - ワークフロー実行ID + - from_source - メッセージソース + +### **レビュー追跡情報** + +**会話のレビューを追跡するために使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| レビュー | Opik Tool |
| --- | --- |
| user_id | メタデータに配置 |
| "moderation" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["moderation"] | tags |
+**レビュー追跡情報** + +- message_id - メッセージID +- user_id - ユーザーID +- workflow_app_log_id - ワークフローアプリケーションログID +- inputs - レビュー入力データ +- message_data - メッセージデータ +- flagged - 注意が必要とマークされたかどうか +- action - 実施された具体的なアクション +- preset_response - プリセットレスポンス +- start_time - レビュー開始時間 +- end_time - レビュー終了時間 +- メタデータ + - message_id - メッセージID + - action - 実施されたアクション + - preset_response - プリセットレスポンス + +--- + +### **提案質問追跡情報** + +**提案質問を追跡するために使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| 提案質問 | Opik LLM |
| --- | --- |
| user_id | メタデータに配置 |
| "suggested_question" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["suggested_question"] | tags |
+**提案質問追跡情報** + +- message_id - メッセージID +- message_data - メッセージデータ +- inputs - 入力データ +- outputs - 出力データ +- start_time - 開始時間 +- end_time - 終了時間 +- total_tokens - トークン総数 +- status - メッセージステータス +- error - エラー情報 +- from_account_id - 送信アカウントID +- agent_based - エージェントベースかどうか +- from_source - メッセージの送信元 +- model_provider - モデルプロバイダー +- model_id - モデルID +- suggested_question - 提案された質問 +- level - ステータスレベル +- status_message - ステータスメッセージ +- メタデータ + - message_id - メッセージID + - ls_provider - モデルプロバイダー + - ls_model_name - モデルID + - status - メッセージステータス + - from_end_user_id - 送信ユーザーID + - from_account_id - 送信アカウントID + - workflow_run_id - ワークフロー実行ID + - from_source - メッセージの送信元 + +--- + +### **データセット検索追跡情報** + +**ナレッジベース検索を追跡するために使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| データセット検索 | Opik Retriever |
| --- | --- |
| user_id | メタデータに配置 |
| "dataset_retrieval" | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["dataset_retrieval"] | tags |
| message_id | parent_run_id |
+**データセット検索追跡情報** + +- message_id - メッセージID +- inputs - 入力データ +- documents - ドキュメントデータ +- start_time - 開始時間 +- end_time - 終了時間 +- message_data - メッセージデータ +- メタデータ + - message_id - メッセージID + - ls_provider - モデルプロバイダー + - ls_model_name - モデルID + - status - メッセージステータス + - from_end_user_id - 送信ユーザーID + - from_account_id - 送信アカウントID + - agent_based - エージェントベースかどうか + - workflow_run_id - ワークフロー実行ID + - from_source - メッセージの送信元 + +--- + +### **ツール追跡情報** + +**ツールの呼び出しを追跡するために使用** + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| ツール | Opik Tool |
| --- | --- |
| user_id | メタデータに配置 |
| tool_name | name |
| start_time | start_time |
| end_time | end_time |
| inputs | inputs |
| outputs | outputs |
| metadata | metadata |
| \["tool", tool_name] | tags |
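上の対応表では、ツール呼び出しのスパン名にツール名がそのまま入り、タグは `["tool", ツール名]` になります。これをコードで表すと、たとえば次のようなスケッチになります(関数名・キー名は説明用の仮定であり、Opik SDK の実際の API ではありません)。

```python
# 説明用のスケッチ:ツール呼び出しを Opik のツールスパン相当の辞書として
# 表現する例。name にツール名、tags に ["tool", ツール名] を入れる(上表参照)。
def tool_span(tool_name: str, tool_inputs: dict, tool_outputs: dict,
              time_cost: float) -> dict:
    return {
        "name": tool_name,
        "type": "tool",
        "input": tool_inputs,
        "output": tool_outputs,
        "tags": ["tool", tool_name],
        "metadata": {"tool_name": tool_name, "time_cost": time_cost},
    }

span = tool_span("web_search", {"query": "Dify"}, {"results": []}, 0.35)
```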
+**ツール追跡情報** + +- message_id - メッセージID +- tool_name - ツール名 +- start_time - 開始時間 +- end_time - 終了時間 +- tool_inputs - ツール入力 +- tool_outputs - ツール出力 +- message_data - メッセージデータ +- error - エラー情報(該当する場合) +- inputs - メッセージの入力 +- outputs - メッセージの出力 +- tool_config - ツール設定 +- time_cost - 時間コスト +- tool_parameters - ツールパラメーター +- file_url - 関連するファイルのURL +- メタデータ + - message_id - メッセージID + - tool_name - ツール名 + - tool_inputs - ツール入力 + - tool_outputs - ツール出力 + - tool_config - ツール設定 + - time_cost - 時間コスト + - error - エラー情報(該当する場合) + - tool_parameters - ツールパラメーター + - message_file_id - メッセージファイルID + - created_by_role - 作成者の役割 + - created_user_id - 作成者ユーザーID diff --git a/ja-jp/user-guide/build-app/flow-app/additional-feature.mdx b/ja-jp/guides/workflow/additional-feature.mdx similarity index 55% rename from ja-jp/user-guide/build-app/flow-app/additional-feature.mdx rename to ja-jp/guides/workflow/additional-feature.mdx index 74896981..11f75e60 100644 --- a/ja-jp/user-guide/build-app/flow-app/additional-feature.mdx +++ b/ja-jp/guides/workflow/additional-feature.mdx @@ -1,74 +1,63 @@ --- title: 追加機能 -version: '日本語' --- ワークフローとチャットフローアプリは、ユーザーのインタラクション体験を向上させるためにさまざまな機能を追加しています。たとえば、ファイルのアップロード機能を追加したり、LLMアプリに自己紹介セクションを組み込んだり、ウェルカムメッセージを活用することで、ユーザはより充実したインタラクションを楽しむことができます。 アプリの右上隅にある **「機能」** ボタンをクリックすると、追加機能を利用できます。 - + + 追加機能の設定方法の動画ガイド + ### ワークフロー -> ワークフロー アプリにファイル アップロード機能を追加するこの方法は推奨されなくなりました + +この方法でのWorkflowアプリへのファイルアップロード機能の追加はもはや推奨されていません。代わりに、アプリ開発者にはカスタムファイル変数を活用して、Workflowアプリにファイルアップロード機能を追加することが推奨されています。 + -ワークフロータイプのアプリは、**「画像のアップロを選択し、設定を完了ード」**機能のみをサポートしています。この機能を有効にすると、ワークフローアプリの使用ページに画像のアップロードエントリが表示されます。 +ワークフロータイプのアプリは、**「画像のアップロード」**機能のみをサポートしています。この機能を有効にすると、ワークフローアプリの使用ページに画像のアップロードエントリが表示されます。 - + + 画像アップロード機能の追加方法の動画ガイド + **使用方法:** **ユーザー向け:** 画像のアップロード機能が有効化されたアプリの使用ページには、アップロードボタンが表示されます。このボタンをクリックするか、ファイルのリンクを貼り付けることで画像をアップロードし、LLMから画像に関する回答を受け取ることができます。 -**開発者向け:** 
画像のアップロード機能を有効化すると、ユーザーがアップロードした画像ファイルは`sys.files`変数に保存されます。次に、LLMノードを追加し、視覚能力を持つ大規模モデルを選択してVISION機能を有効化し、`sys.files`変数を選択することで、LLMがその画像ファイルを読み取れるようになります。 +**開発者向け:** 画像のアップロード機能を有効化すると、使用者がアップロードした画像ファイルは`sys.files`変数に保存されます。次に、LLMノードを追加し、視覚能力を持つ大規模モデルを選択してVISION機能を有効化し、`sys.files`変数を選択することで、LLMがその画像ファイルを読み取れるようになります。 最後に、ENDノードでLLMノードの出力変数を選択し、設定を完了させます。 - - LLM节点中开启视觉分析能力的设置界面 - - ### チャットフロー チャットフロータイプのアプリは、以下の機能をサポートしています: * **冒頭の対話** - AIが自動的に一文を送信し、歓迎メッセージやAIの自己紹介などでユーザーとの距離を縮めます。 + AIが自動的に一文を送信し、歓迎メッセージやAIの自己紹介などで使用者との距離を縮めます。 * **次の質問の提案** 対話が完了した後に、自動的に次の質問の提案を追加することで、対話のトピックの深さと頻度を向上させます。 * **テキストから音声への変換** - テキストボックスに音声再生ボタンを追加し、TTSサービスを利用してテキストを読み上げます。 + テキストボックスに音声再生ボタンを追加し、TTSサービス([モデルプロバイダ](../../getting-started/readme/model-providers.md)が提供)を利用してテキストを読み上げます。 * **ファイルのアップロード** - ドキュメント、画像、音声、映像、その他のファイル形式をサポートしています。この機能を有効にすると、アプリのユーザーは対話の過程でいつでもファイルをアップロードおよび更新できます。最大10個のファイルを同時にアップロードでき、各ファイルのサイズ上限は15MBです。 - - - Chatflow应用中文件上传功能的设置界面 - + ドキュメント、画像、音声、ビデオ、その他のファイル形式をサポートしています。この機能を有効にすると、アプリの使用者は対話の過程でいつでもファイルをアップロードおよび更新できます。最大10個のファイルを同時にアップロードでき、各ファイルのサイズ上限は15MBです。 + ファイルのアップロード機能 * **引用と帰属** - [「知識検索」](node/knowledge-retrieval)ノードと組み合わせることで、LLMが応答した際の参照元ドキュメントと帰属部分を表示します。 - + [「知識検索」](node/knowledge-retrieval.md)ノードと組み合わせることで、LLMが応答した際の参照元ドキュメントと帰属部分を表示します。 * **コンテンツの審査** - 審査APIを利用して適切な単語リストを維持し、LLMが安全なコンテンツを応答および出力できるようにします。詳細については[適切なコンテンツの審査](../application-orchestrate/app-toolkits/moderation-tool)を参照してください。 + 審査APIを利用して適切な単語リストを維持し、LLMが安全なコンテンツを応答および出力できるようにします。詳細については[適切なコンテンツの審査](../application-orchestrate/app-toolkits/moderation-tool.md)を参照してください。 **使用方法:** @@ -78,10 +67,6 @@ version: '日本語' **ユーザー向け:** ファイルのアップロード機能が有効化されたチャットフローアプリでは、対話ボックスの右側に「クリップ」アイコンが表示されます。このアイコンをクリックすることでファイルをアップロードし、LLMと対話できます。 - - Chatflow应用中使用文件上传功能的界面 - - **アプリ開発者向け:** ファイルアップロード機能を有効にすると、ユーザーがアップロードしたファイルは `sys.files` 変数に保存されます。この変数は、ユーザーが同じ会話ラウンドで新しいメッセージを送信した後に更新されます。 @@ -90,18 +75,14 @@ version: '日本語' * **ドキュメントファイル** 
-LLMは直接ドキュメントファイルを読み取る機能を持っていないため、[ドキュメント抽出機](node/doc-extractor) ノードを使用して `sys.files` 変数内のファイルを前処理する必要があります。設定手順は以下の通りです: +LLMは直接ドキュメントファイルを読み取る機能を持っていないため、[ドキュメント抽出機](node/doc-extractor.md) ノードを使用して `sys.files` 変数内のファイルを前処理する必要があります。設定手順は以下の通りです: 1. Features 機能を有効にし、ファイルタイプで "ドキュメント" のみを選択します。 -2. [ドキュメント抽出機](node/doc-extractor) ノードの入力変数で `sys.files` 変数を選択します。 +2. [ドキュメント抽出機](node/doc-extractor.md) ノードの入力変数で `sys.files` 変数を選択します。 3. LLM ノードを追加し、システムプロンプトでドキュメント抽出機ノードの出力変数を選択します。 -4. 最後に "直接返信" ノードを追加し、LLM ノードの出力変数を記入します。 +4. 最後に "回答" ノードを追加し、LLM ノードの出力変数を記入します。 -この方法で構築された チャットフロー アプリは、アップロードされたファイルの内容を記憶しません。アプリのユーザーは毎回チャットボックスでドキュメントファイルをアップロードする必要があります。アプリがアップロードされたファイルを記憶する場合は、[「ファイルアップロード:開始ノードに変数を追加」](./file-upload#1-2)を参照してください。 - - - 处理文档文件的工作流编排示意图 - +この方法で構築された チャットフロー アプリは、アップロードされたファイルの内容を記憶しません。アプリの使用者は毎回チャットボックスでドキュメントファイルをアップロードする必要があります。アプリがアップロードされたファイルを記憶する場合は、[「ファイルアップロード:開始ノードに変数を追加」](file-upload.md#fang-fa-er-zai-tian-jia-wen-jian-bian-liang)を参照してください。 * **画像ファイル** @@ -111,24 +92,19 @@ LLMは直接ドキュメントファイルを読み取る機能を持ってい 1. Features 機能を有効にし、ファイルタイプで "画像" のみを選択します。 2. LLM ノードを追加し、VISION 機能を有効にして `sys.files` 変数を選択します。 -3. 最後に "直接返信" ノードを追加し、LLM ノードの出力変数を記入します。 - - - LLM节点中开启视觉分析能力的设置界面 - +3. 最後に "回答" ノードを追加し、LLM ノードの出力変数を記入します。 * **複合ファイルタイプ** -ドキュメントファイルと画像ファイルを同時に処理したい場合は、[リスト操作](node/list-operator) ノードを使用して `sys.files` 変数内のファイルを前処理し、より詳細な変数を抽出して対応する処理ノードに送信する必要があります。設定手順は以下の通りです: +ドキュメントファイルと画像ファイルを同時に処理したい場合は、[リスト操作](node/list-operator.md) ノードを使用して `sys.files` 変数内のファイルを前処理し、より詳細な変数を抽出して対応する処理ノードに送信する必要があります。設定手順は以下の通りです: 1. Features 機能を有効にし、ファイルタイプで "画像" および "ドキュメントファイル" を選択します。 2. 二つのリスト操作ノードを追加し、"フィルタリング" 条件で画像とドキュメント変数を抽出します。 3. ドキュメントファイル変数を抽出し、"ドキュメント抽出機" ノードに渡し、画像ファイル変数を抽出し、LLM ノードに渡します。 -4. 最後に "直接返信" ノードを追加し、LLM ノードの出力変数を記入します。 +4. 
最後に "回答" ノードを追加し、LLM ノードの出力変数を記入します。 -アプリユーザーが文書ファイルと画像を同時にアップロードした場合、文書ファイルは自動的に文書抽出機ノードに送られ、画像ファイルはLLMノードに送られて、ファイルを共同で処理することができます。 +アプリ使用者が文書ファイルと画像を同時にアップロードした場合、文書ファイルは自動的に文書抽出機ノードに送られ、画像ファイルはLLMノードに送られて、ファイルを共同で処理することができます。 * **音声・動画ファイル** -LLMは音声・動画ファイルを直接読み取る機能をサポートしておらず、Difyプラットフォームにも関連するファイル処理ツールは組み込まれていません。アプリ開発者は[外部データツール](../extension/api-based-extension/external-data-tool)を参照して、ファイル情報を自分で処理することができます。 - +LLMは音声・動画ファイルを直接読み取る機能をサポートしておらず、Difyプラットフォームにも関連するファイル処理ツールは組み込まれていません。アプリ開発者は[外部データツール](../extension/api-based-extension/external-data-tool.md)を参照して、ファイル情報を自分で処理することができます。 diff --git a/ja-jp/user-guide/build-app/flow-app/application-publishing.mdx b/ja-jp/guides/workflow/application-publishing.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/application-publishing.mdx rename to ja-jp/guides/workflow/application-publishing.mdx diff --git a/ja-jp/guides/workflow/bulletin.mdx b/ja-jp/guides/workflow/bulletin.mdx new file mode 100644 index 00000000..0526d4dd --- /dev/null +++ b/ja-jp/guides/workflow/bulletin.mdx @@ -0,0 +1,131 @@ +--- +title: 変更のお知らせ:画像アップロードがファイルアップロードに統合されました +description: 作者:Evanchen , Allen. 
+---
+
+今後、画像ファイルのアップロード機能は、より全面的な「ファイルアップロード」機能に統合されます。機能の重複を避けるため、チャットフローとワークフローの「機能」をアップグレードし、調整しました:
+
+* チャットフローの「機能」から画像アップロードオプションを削除し、新たに「ファイルアップロード」機能を追加します。この機能では、画像ファイルタイプを選択できます。また、アプリのダイアログボックス内の画像アップロードアイコンもファイルアップロードアイコンに変更されました。
+
+チャットフローの機能メニュー
+
+* **ワークフローの機能および`sys.files`[変数](./variables.md)にあった画像アップロードオプションは、将来的に廃止されます。** 両方とも`LEGACY`としてマークされ、開発者にはワークフローにファイルアップロード機能を追加するためにカスタムファイル変数の使用が推奨されています。
+
+ワークフローのLEGACY機能表示
+
+### 「画像アップロード」機能を統合する理由
+
+以前、Difyは画像ファイルのアップロードのみをサポートしていましたが、最新バージョンでは文書、画像、音声、映像、カスタムファイル形式をサポートする包括的なファイルアップロード機能が導入されました。画像アップロードは、この包括的な「ファイルアップロード」機能に統合されました。ファイルアップロード機能を追加する際、開発者は「画像」ファイルタイプを選択するだけで画像のアップロードを有効にできます。
+
+冗長な機能による混乱を避けるため、チャットフローにおける単独の画像アップロード機能を包括的なファイルアップロード機能に置き換え、ワークフローにおいて画像アップロードを推奨しないことに決定しました。
+
+### より全面的な機能:ファイルアップロード
+
+アプリの情報処理能力を向上させるために、このアップデートで「ファイルアップロード」機能が導入されました。チャットテキストとは異なり、文書ファイルは学術レポートや法的契約など多くの情報を含むことができます。
+
+* ファイルアップロード機能により、ファイルはワークフロー内でファイル変数としてアップロード、解析、参照、ダウンロードされます。
+* 開発者は、画像、音声、映像を含む複雑なタスクの理解と処理が可能なアプリを簡単に構築できます。
+
+ファイルアップロード機能の設定画面
+
+単独の「画像アップロード」機能の使用は推奨されません。アプリ体験を向上させるため、包括的な「ファイルアップロード」機能への移行をお勧めします。
+
+### あなたがするべきことは?
+ +#### Dify Cloudユーザーの場合: + +* **チャットフロー** + +すでに「画像アップロード」機能が有効になっているチャットフローを作成した場合、LLMノードでビジョン機能を有効にすると、システムは機能を自動的に切り替え、アプリの画像アップロード機能に影響を与えません。アプリを更新して再公開する必要がある場合は、LLMノードのビジョン変数選択ボックスでファイル変数を選択し、チェックリストからアイテムをクリアしてアプリを再公開してください。 + +LLMノードのビジョン機能設定 + +チャットフローに「画像アップロード」機能を追加したい場合は、機能で「ファイルアップロード」を有効にし、「画像」ファイルタイプのみを選択してください。その後、LLMノードでビジョン機能を有効にし、sys.files変数を指定してください。アップロードエントリは「ペーパークリップ」アイコンとして表示されます。詳細な手順については、追加機能を参照してください。 + +ファイルアップロードの設定方法 + +* **ワークフロー** + +すでに「画像アップロード」機能が有効になっているワークフローを作成し、LLMノードでビジョン機能を有効にした場合、この変更は直ちには影響しませんが、公式の廃止前に手動で移行を完了する必要があります。 + +ワークフローに「画像アップロード」機能を有効にしたい場合は、[開始](./node/start.md)ノードにファイル変数を追加してください。その後、`sys.files`変数を使用せずに後続ノードでこのファイル変数を参照してください。 + +#### Dify Community Editionまたは自己ホストのエンタープライズユーザーの場合: + +バージョンv0.10.0にアップグレードすると、「ファイルアップロード」機能が表示されます。 + +* チャットフロー: + +「画像アップロード」機能が有効になっているチャットフローは、変更を加えることなくファイルアップロード機能に自動的に切り替わります。 + +チャットフローに「画像アップロード」機能を追加したい場合は、詳細な手順については追加機能セクションを参照してください。 + +* ワークフロー: + +既存のワークフローには影響がありませんが、公式の廃止前に手動で移行を完了する必要があります。 + +### よくある質問: + +#### 1. このアップデートは既存のアプリに影響しますか? + +* 既存のチャットフローは自動的に移行され、画像のアップロード機能はファイルのアップロード機能にスムーズに切り替わります。`sys.files`変数は引き続きデフォルトのVision入力として使用されます。アプリインターフェース内の画像アップロードエントリは、ファイルアップロードエントリに置き換えられます。 +* 現時点では既存のワークフローには影響はありません。`sys.files`変数および画像アップロード機能は「LEGACY」としてマークされていますが、引き続き使用可能です。ただし、これらの「LEGACY」機能は将来的に廃止される予定で、その際には手動でのアップデートが必要になります。 + +#### 2. アプリをすぐにアップデートする必要がありますか? + +* チャットフローはシステムが自動的に移行するため、手動でのアップデートは必要ありません。 +* ワークフローについては、すぐにアップデートする必要はありませんが、将来の移行に備えて新しいファイルアップロード機能に慣れておくことをお勧めします。 + +#### 3. 新しいファイルアップロード機能と互換性のあるアプリを確認する方法は? + +チャットフローの場合: + +• 機能構成でファイルのアップロードオプションが有効になっているか確認してください。 + +• Vision機能を備えたLLMを使用していることを確認し、Visionトグルをオンにしてください。 + +• Visionボックスで、`sys.files`が入力アイテムとして正しく選択されていることを確認してください。 + +ワークフローの場合: + +• 「開始」ノードでファイルタイプの変数を作成してください。 + +• 後続のノードでは、このファイル変数を参照し、LEGACYの`sys.files`変数は使用しないでください。 + +#### 4. 以前公開された Chatflow アプリケーションで画像アップロードアイコンが消えた場合、どうすればよいですか? 
+ +アプリケーションを再公開することをお勧めします。チャットボックスにファイルアップロードアイコンが表示されます。 + +#### 皆様のフィードバックを大切にしています + +Difyコミュニティの重要なメンバーとして、皆様の経験とフィードバックは私たちにとって非常に重要です。ぜひ以下の方法でご意見をお寄せください: + +• 新しいファイルアップロード機能をお試しいただき、その利便性と柔軟性を体験してください。 + +• 次のチャンネルを通じてお考えやご提案を共有してください: + +• [GitHub discussions](https://github.com/langgenius/dify) + +• [Discordチャンネル](https://discord.gg/X8r5WgWzJV) + +皆様のフィードバックは製品の継続的な改善と、コミュニティ全体により良い体験を提供するために役立ちます。 diff --git a/ja-jp/user-guide/build-app/flow-app/concepts.mdx b/ja-jp/guides/workflow/concepts.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/concepts.mdx rename to ja-jp/guides/workflow/concepts.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/create-flow-app.mdx b/ja-jp/guides/workflow/create-flow-app.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/create-flow-app.mdx rename to ja-jp/guides/workflow/create-flow-app.mdx diff --git a/ja-jp/guides/workflow/debug-and-preview/README.mdx b/ja-jp/guides/workflow/debug-and-preview/README.mdx new file mode 100644 index 00000000..aae56e92 --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/README.mdx @@ -0,0 +1,3 @@ +--- +title: デバッグプレビュー +--- diff --git a/ja-jp/guides/workflow/debug-and-preview/checklist.mdx b/ja-jp/guides/workflow/debug-and-preview/checklist.mdx new file mode 100644 index 00000000..11e95b51 --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/checklist.mdx @@ -0,0 +1,9 @@ +--- +title: チェックリスト +--- + + +調整動作に入る前に、未完了の設定や接続されていないノードがないかチェックリストで確認できます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/16201deaa47a518ff73c983a33ab4002.png) + diff --git a/ja-jp/guides/workflow/debug-and-preview/history.mdx b/ja-jp/guides/workflow/debug-and-preview/history.mdx new file mode 100644 index 00000000..52141571 --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/history.mdx @@ -0,0 +1,8 @@ +--- +title: 実行履歴 +--- + + +「実行履歴」では、現在のワークフローのデバッグ履歴の実行結果およびログ情報を確認できます。 + 
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/5ff3e82563c43b41e359c83483fd0f9b.png) \ No newline at end of file diff --git a/ja-jp/guides/workflow/debug-and-preview/log.mdx b/ja-jp/guides/workflow/debug-and-preview/log.mdx new file mode 100644 index 00000000..791d7c4c --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/log.mdx @@ -0,0 +1,12 @@ +--- +title: 対話/実行ログ +--- + + +「ログを表示 - 詳細」をクリックすると、詳細情報、入力/出力、メタデータ情報などの実行概要を見ることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/9b88af156ab35bb5b05b00ffc3e84dc7.png) + +「ログを表示 - 追跡」をクリックすると、ワークフローの各ノードの入力/出力、トークン消費、実行時間などの完全な実行過程を見ることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/b1e0a84aabbafd96897d277d787019de.png) \ No newline at end of file diff --git a/ja-jp/guides/workflow/debug-and-preview/step-run.mdx b/ja-jp/guides/workflow/debug-and-preview/step-run.mdx new file mode 100644 index 00000000..88fc1a1e --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/step-run.mdx @@ -0,0 +1,12 @@ +--- +title: ステップ実行 +--- + + +ワークフローはノードのステップ実行をサポートしており、ステップ実行中に現在のノードの実行が期待通りかどうかを繰り返しテストすることができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/36e547165a5088510c99baee4ce42bcd.png) + +ステップテスト実行後、実行ステータス、入力/出力、メタデータ情報を確認することができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/040e1051d33b94d35e4683d3c89691a8.png) \ No newline at end of file diff --git a/ja-jp/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx b/ja-jp/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx new file mode 100644 index 00000000..f47da800 --- /dev/null +++ b/ja-jp/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.mdx @@ -0,0 +1,16 @@ +--- +title: プレビューと実行 +--- + + +Difyワークフローは、完全な実行・デバッグ機能を提供しています。対話型アプリケーションでは、「プレビュー」をクリックするとデバッグモードに入ります。 + 
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/91379dc42d0d815e52ddad0cc5450a46.png) + +ワークフローアプリケーションでは、「実行」をクリックするとデバッグモードに入ります。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/b92d7536392b1e1f2423d0e3aa113915.png) + +デバッグモードに入ると、インターフェースの右側で設定済みのワークフローをデバッグできます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/debug-and-preview/4c81791508592e0f8019b8ebf8f119ea.png) \ No newline at end of file diff --git a/ja-jp/guides/workflow/error-handling/README.mdx b/ja-jp/guides/workflow/error-handling/README.mdx new file mode 100644 index 00000000..93fd0efe --- /dev/null +++ b/ja-jp/guides/workflow/error-handling/README.mdx @@ -0,0 +1,131 @@ +--- +title: エラー処理 +description: 著者:アレン、エヴァン +--- + + +ワークフローアプリは通常、複数の連携する部分(ノード)から成り立っています。どこか一つの部分で発生したエラー(APIリクエストの失敗やLLM出力の問題など)が全体の動作を止めてしまうことがあり、その場合、開発者は故障箇所を見つけて修正するのに大きな労力を費やす必要があります。これは、特に複雑なワークフローの場合には一層の課題となります。 + +エラー処理メカニズムによって、部分的な問題に様々な方法でうまく対処できるようになります。これにより、一部で問題が発生しても、全体の処理を止めることなくエラー情報を記録したり、別の方法でタスクを完了させたりすることが可能です。アプリケーションの重要な部分にこのメカニズムを取り入れることで、全体の柔軟性と強靭さが大きく向上します。 + +開発者は、複雑なエラー対応コードを各ノードに書き込む必要がなくなります。また、エラー処理用の追加部分を設けることもなくなります。このメカニズムは、ワークフローの設計をシンプルにし、様々な戦略を用いて実行ロジックを整理します。 + +## 実践シナリオ + +1. **ネットワークエラーの対応** 例えば、あるワークフローが天気情報、ニュース要約、ソーシャルメディア分析の3つのAPIサービスからデータを取得し、それらを統合する必要があるとします。リクエスト制限により一部のサービスが応答しない場合があります。エラー処理を利用して、メインの処理は他のデータソースの処理を続けつつ、失敗したAPI呼び出しの詳細を記録します。これにより開発者は後からこれらの情報を分析し、サービス呼び出しの戦略を改善することができます。 +2. **ワークフローの代替ルート** たとえば、あるLLMノードが詳細なドキュメントの要約を行う際に、入力が長すぎてトークン制限を超えてしまい、エラーが発生することがあります。エラー処理メカニズムを設定することで、このような状況に遭遇した際には、内容を小分けにして処理を続ける代替ルートを自動的に選択でき、処理の中断を避けることができます。 +3. 
**エラー情報の明確化** 実行中にあいまいなエラーメッセージ(「呼び出し失敗」など)が返されると、問題の特定が難しくなることがあります。エラー処理メカニズムにより、開発者はエラーメッセージを事前に定義することができ、デバッグ時により明確で正確な情報を提供することが可能になります。 + +## エラー処理メカニズムの活用 + +エラー処理メカニズムは、以下の4つのノードタイプで利用できます。各ノードの詳細なドキュメントも参照し、アプリを強化してください: + +* [LLM](../node/llm.md) +* [HTTP](../node/http-request.md) +* [コード](../node/code.md) +* [ツール](../node/tools.md) + +**失敗後の再試行** + +特定の例外状況では、ノードの再試行操作によって問題を解決できる場合があります。このような場合には、ノードの「失敗後の再試行」機能を有効化し、再試行の最大回数と間隔を指定することが可能です。 + +![](https://assets-docs.dify.ai/2024/12/18097e4c94b67a79150b967fc50f9f43.png) + +もしノードの再試行を行ってもエラーが続く場合、エラー処理機能が予め定められた対策に従って次の手順を進めます。 + +**エラー処理** + +エラー処理の仕組みには、次の3つの選択肢があります: + +* **処理なし**:エラーを処理せずに、ノードからのエラーメッセージをそのまま返し、フロー全体を停止します。 +* **デフォルト値**:開発者がエラー情報をあらかじめ定義できるようにします。エラーが発生した場合、定義された値によって元のノードが返すエラー情報を置き換えます。 +* **エラーブランチ**:エラーが発生した場合、あらかじめ設定されたエラー処理のブランチを実行します。 + +各処理方法の詳細と設定方法については、[事前定義されたエラー処理ロジック](predefined-nodes-failure-logic.md)をご覧ください。 + +![](https://assets-docs.dify.ai/2024/12/6e2655949889d4d162945d840d698649.png) + +## スタートアップガイド + +**シナリオ:ワークフローアプリにエラー処理メカニズムを追加する** + +以下は、ワークフローアプリでエラー処理メカニズムを設定し、ノードのエラーに備えて代替処理を行う方法の簡単な例です。 + +![](https://assets-docs.dify.ai/2024/12/958326384d3b60a98246e9ff565c7ed3.png) + +**アプリのロジック**:LLMノードは入力された指示に従って、正しい形式や間違った形式のJSONコードを生成します。その後、Aコードノードがこのコードを実行し結果を出力します。Aコードノードが誤った形式のJSONコンテンツを受け取った場合には、設定されたエラー処理メカニズムに基づいて、代替パスを実行しメインプロセスを継続します。 + +**1. JSONコード生成ノードの設定** + +新しいワークフローアプリを作成し、LLMノードとコードノードを追加します。プロンプトを使ってLLMが指示に従い、正しい形式や間違った形式のJSONコンテンツを生成し、それがAコードノードで検証されます。 + +**LLMノードでのプロンプト例:** + +``` +You are a teaching assistant. According to the user's requirements, you only output a correct or incorrect sample code in json format. +``` + +**コードノードでのJSON検証コード(`json`モジュールのインポートが必要です):** + +```python +import json + +def main(json_str: str) -> dict: + obj = json.loads(json_str) + return {'result': obj} +``` + +**2. 
Aコードノードにエラー処理機能を追加する** + +Aコードノードは、JSONコンテンツを検証する役割を持ち、受け取ったJSONコンテンツがフォーマットに合わない場合には、エラー処理機能を通じて代替の手順を踏み、エラーを修正するために次のLLMノードに渡します。その後、JSONを再度検証し、メインの処理フローを再開します。Aコードノードの「エラー処理」タブから「エラー分岐」の設定を行い、新たなLLMノードを設定しましょう。 + +**3. Aコードノードが出力するエラーの内容を修正** + +新設したLLMノードでは、プロンプトに指示を記述し、変数を用いてAコードノードが出力したエラー内容を参照・修正します。次に、Bコードノードを追加し、JSONコンテンツの再検証を行います。 + +**4. プロセスの完了** + +変数を集約するノードを追加して、正常な処理結果とエラー処理結果をまとめ、終了ノードで出力します。これでフロー全体のプロセスが完了します。 + +![](https://assets-docs.dify.ai/2024/12/059b5a814514cd9abe10f1f4077ed17f.png) + +> デモ用のDSLファイルはこちらから[ダウンロード](https://assets-docs.dify.ai/2024/12/087861aa20e06bb4f8a2bef7e7ae0522.yml)できます。 + +## ステータス説明 + +この文書では、ノードとプロセスの状態について説明します。状態を明確にすることで、開発者は現在のワークフローアプリケーションの動作状況を理解しやすくなり、問題解決や迅速な意思決定に役立ちます。エラー処理機能を導入したことにより、ノードとプロセスの状態には以下のような分類があります: + +**ノードの状態** + +* **成功** ノードが正常に動作して正確な情報を出力しました。 +* **失敗** エラー処理が行われず、ノードの動作が失敗し、エラー情報が出力されました。 +* **エラー** ノードでエラーが発生しましたが、処理が継続されました。 + +**ワークフローの状態** + +* **成功** プロセスの全ノードが正常に動作し、終了ノードが適切に情報を出力し、ステータスが成功として設定されました。 +* **失敗** ノードでエラーが発生し、プロセス全体が停止し、ステータスが失敗として設定されました。 +* **部分成功** ノードでエラーが発生しましたが、エラー処理機能によってプロセス全体が最終的に正常に動作しました。ステータスは部分成功として設定されました。 + +## よくある質問 + +**1. 
エラー処理機構を導入することで何が変わりますか?** + +**エラー処理機構がない場合:** + +* **作業フローが中断する**:外部のサービス呼び出しに失敗する、ネットワークに問題がある、ツールにエラーが存在するなどの理由で、一つのエラーが発生すると、作業フローが即座に停止します。開発者はエラーを手動で探し出し、修正後に作業フローを再開する必要があります。 +* **対応策の制約**:開発者は異なるエラータイプや事象に応じて特別な対応策を講じることができません。たとえば、エラーが発生した場合にもフローを継続する、または別の処理へ切り替えるといったことができなくなります。 +* **冗長なノードの手動追加が必要**:エラーがフロー全体に影響を与えないようにするためには、エラーを捉えて処理するために多くの追加ノードを設計する必要があり、これが作業フローの複雑さや開発コストを増加させます。 +* **限られたログ情報**:エラーログは通常、内容が簡素であったり、必要な情報が不足していたりします。これでは問題の迅速な診断が難しくなります。 + +**エラー処理機構を導入した後:** + +* **フローが中断されることがない**:あるノードでエラーが発生しても、事前に定めたルールに従って作業フローを継続することができ、一点の障害が全体に影響することが少なくなります。 +* **エラー処理を柔軟にカスタマイズ可能**:開発者はそれぞれのノードごとにエラー対応の戦略を設定できます。例えば、フローを継続したり、ログを残したり、別のルートに切り替えたりすることができます。 +* **作業フローの設計をシンプルに**:一般的なエラー処理機構により、開発者が冗長なノードを手動で設計する必要が減り、作業フローがよりシンプルで明瞭になります。 +* **詳細なエラーログを提供**:カスタマイズ可能なエラー情報の整理メカニズムを提供し、開発者が問題を迅速に特定し、フローを最適化するのに役立ちます。 + +**2. 代替ルートの実行状況をどうやってデバッグしますか?** + +作業フローの実行ログをチェックすることで、条件分岐やルート選択の状況を確認できます。エラー処理のブランチは黄色でハイライトされ、開発者が計画通りに代替ルートが実行されているかどうかを簡単に確認できます。 diff --git a/ja-jp/guides/workflow/error-handling/error-type.mdx b/ja-jp/guides/workflow/error-handling/error-type.mdx new file mode 100644 index 00000000..729275eb --- /dev/null +++ b/ja-jp/guides/workflow/error-handling/error-type.mdx @@ -0,0 +1,100 @@ +--- +title: エラータイプの概要 +--- + + +本記事では、さまざまなノードで発生可能なトラブルと、それに伴うエラーの種類について解説します。 + +## チャットフロー/ワークフロー + +* **システムエラー** + システム関連の問題が原因で発生するエラーです。例えば、サービスが正しく起動していない、ネットワーク接続に問題がある場合などが該当します。 + +* **操作エラー** + 開発者がノードの設定や操作に失敗した際に生じるエラーです。 + +## コードノード +[コードノード](../node/code.md)を使用することで、PythonやJavaScriptのコードを実行し、データ変換を行うことができます。ここでは、よくある4つのエラーを紹介します: + +1. **コードエラー(CodeNodeError)** + 開発者のコード内で例外が発生した場合にこのエラーが起きます。変数が不足している、計算ロジックが間違っている、文字列として扱うべき配列を誤って変数として扱っている場合などがあります。エラーメッセージや具体的な行番号で問題を特定できます。 + + ![コードエラー](https://assets-docs.dify.ai/2024/12/c86b11af7f92368180ea1bac38d77083.png) + +2. **サンドボックスのネットワーク問題(System Error)** + ネットワークのトラフィック異常や接続問題によって生じるエラーです。サンドボックスサービスが停止している、プロキシがネットワークをブロックしている場合などです。この問題は次の手順で解決可能です: + a. 
ネットワークの品質を確認する + b. サンドボックスサービスを再起動する + c. プロキシ設定を見直す + + ![サンドボックスのネットワーク問題](https://assets-docs.dify.ai/2024/12/d95007adf67c4f232e46ec455c348e2c.PNG) + +3. **ネスト制限エラー(DepthLimitError)** + 現在のノードは、最大で5層までのネスト構造をサポートしています。これを超えるとエラーが発生します。 + + ![DepthLimitError](https://assets-docs.dify.ai/2024/12/5649d52a6e80ddd4180b336266701f7b.png) + +4. **出力検証エラー(OutputValidationError)** + 選択した出力変数の型と実際の出力変数の型が一致しない場合に生じるエラーです。開発者は適切な出力変数の型を選択し直すことで、この問題を回避することができます。 + + ![OutputValidationError](https://assets-docs.dify.ai/2024/12/ab8cae01a590b037017dfe9ea4dbbb8b.png) + +## LLMノード + +[LLMノード](../node/llm.md)は、チャットフローやワークフローの中核をなすコンポーネントであり、大規模言語モデルを用いて様々なタスクを処理します。 + +以下は、実行時に遭遇する可能性のある6つの一般的なエラーです: + +1. **変数が見つからない(VariableNotFoundError)** + システムプロンプトやコンテキストで指定された変数がLLMによって見つけられない場合にこのエラーが発生します。開発者は、補足となる変数を設定することで問題を解決できます。 + + ![VariableNotFoundError](https://assets-docs.dify.ai/2024/12/f20c5fbde345144de6183374ab277662.png) + +2. **コンテキスト構造の無効 (InvalidContextStructureError)** + LLMノードが不正なデータ構造を受け取った場合に報告されます。コンテキストは文字列データ構造のみをサポートします。 + +3. **無効な変数タイプ(InvalidVariableTypeError)** + システムプロンプトの形式が一般的なテキストやJinja syntaxでない場合にこのエラーが生じます。 + +4. **モデルが存在しない(ModelNotExistError)** + 各LLMノードにはモデルの指定が必要です。モデルが選択されていない場合には、このエラーが発生します。 + +5. **LLMの認証が必要(LLMModeRequiredError)** + 選択されたモデルにAPIキーが設定されていない場合にこのエラーが報告されます。ドキュメントの指示に従ってモデルを認証してください。 + +6. **プロンプトが見つからない(NoPromptFoundError)** + LLMノードのプロンプトが空の場合、エラーが生じます。 + +## HTTPノード + +[HTTPノード](../node/http-request.md)は、HTTPリクエストを送信してデータを取得、Webhookを発火、画像を生成、ファイルをダウンロードするなどの操作を可能にし、カスタマイズ可能なリクエストによって外部サービスとのシームレスな統合を実現します。ここでは、このノードで頻繁に発生する5つの一般的なエラーを紹介します: + +1. **認証設定エラー(AuthorizationConfigError)** + 認証情報が設定されていない場合に発生するエラーです。 + +2. **ファイル取得エラー(FileFetchError)** + ファイル変数が取得できない場合に発生するエラーです。 + +3. **不正なHTTPリクエストメソッド(InvalidHttpMethodError)** + リクエストメソッドがGET、HEAD、POST、PUT、PATCH、DELETEのいずれにも該当しない場合にエラーが発生します。 + +4. **レスポンスサイズ超過(ResponseSizeError)** + HTTPレスポンスが10MBの制限を超えると、このエラーが発生します。 + +5. 
**HTTPレスポンスコードエラー(HTTPResponseCodeError)** + レスポンスコードが200系以外(例:400、404、500など)の場合にエラーが報告されます。例外処理が有効であれば、これらのステータスコードによるエラーが報告されますが、それ以外ではエラーは報告されません。 + +## ツールノード + +ランタイムでよく遭遇する3つのエラーは以下のとおりです: + +1. **ツール実行エラー(ToolNodeError)** + ツール自体の実行に問題があった場合に報告されるエラーです。たとえば、目指すAPIのリクエスト制限に達した場合などがこれに該当します。 + + ![](https://assets-docs.dify.ai/2024/12/84af0831b7cb23e64159dfbba80e9b28.jpg) + +2. **ツールパラメータエラー(ToolParameterError)** + ツールノードの設定パラメータに問題がある場合、つまりツールノードが要求するパラメータと異なる値が入力された場合にこのエラーが発生します。 + +3. **ツールファイル処理エラー(ToolFileError)** + ツールノードの処理に必要なファイルが見つからない場合にこのエラーが発生します。 \ No newline at end of file diff --git a/ja-jp/guides/workflow/error-handling/predefined-nodes-failure-logic.mdx b/ja-jp/guides/workflow/error-handling/predefined-nodes-failure-logic.mdx new file mode 100644 index 00000000..4ff53282 --- /dev/null +++ b/ja-jp/guides/workflow/error-handling/predefined-nodes-failure-logic.mdx @@ -0,0 +1,70 @@ +--- +title: 事前定義されたエラー処理ロジック +--- + + +以下の4つのノードは、エラー状況に対応するためのロジックを構築する機能を提供しています: + +* [LLM](../node/llm.md) +* [HTTP](../node/http-request.md) +* [コード](../node/code.md) +* [ツール](../node/tools.md) + +エラー処理のためには、次の3つの事前に定義されたロジックオプションがあります: + +* **処理なし**:エラーを処理せずに、ノードからのエラーメッセージをそのまま返し、フロー全体を停止します。 +* **デフォルト値**:開発者がエラー情報をあらかじめ定義できるようにします。エラーが発生した場合、定義された値によって元のノードが返すエラー情報を置き換えます。 +* **エラーブランチ**:エラーが発生した場合、あらかじめ設定されたエラー処理のブランチを実行します。 + +![エラー処理](https://assets-docs.dify.ai/2024/12/6e2655949889d4d162945d840d698649.png) + +### 処理ロジック:処理なし + +これはノードのエラー処理のデフォルト設定であり、タイムアウトやエラーが発生した場合には直接エラーメッセージをスローし、全体の処理フローを中断します。この場合、ワークフローアプリケーションは実行失敗として記録されます。 + +### 処理ロジック:デフォルト値 + +開発者はデフォルト値エディタを使用して、ノードのエラー出力情報をカスタマイズできます。これは、プログラミングにおけるステップバイステップ(逐次的な)デバッグに似ており、アプリケーションのデバッグプロセスをより明確にします。 + +例えば: +* `object`や`array`型には、直観的な`JSON`エディタを用意しています。 +* `number`や`string`型には、それぞれの型に合わせたエディタを用意しています。 + +ノードの実行が失敗した場合、フローは自動的に開発者が設定したデフォルト値を使用し、オリジナルのエラー出力情報の代わりとして処理を続行します。これにより、より明確なエラーメッセージが得られ、開発者はアプリケーションのフローデザインの最適化に注力できます。 + +> 
デフォルト値のデータ構造は、ノードの出力変数と一致します。例えば、コードノードの出力変数をarray[number]データタイプに設定した場合、デフォルト値のデータタイプも同様にarray[number]になります。 + +![エラー処理:デフォルト値](https://assets-docs.dify.ai/2024/12/e9e5e757090679243e0c9976093c7e6c.png) + +### 処理ロジック:エラーブランチ + +現在のノードの実行でエラーが発生した場合、予め設定されたエラーブランチがトリガーされます。この選択を行うと、新たな接続点が現在のノードに追加され、開発者はキャンバス上で次の処理フローを構築するか、ノード詳細の右下隅で下流ノードを追加することができます。 + +> エラーブランチはオレンジ色の線で示されます。 + +![](https://assets-docs.dify.ai/2024/12/e5ea1af947818bd9e27cab3042c1c4f3.png) + +一般的な戦略としては、エラーブランチ内でエラーに対応するノードを配置し、修正されたデータを変数集約ノードを介して元のフローにリンクし、結果を集約して出力します。例えば、メールツールノードを接続してエラー情報を送信することができます。 + +**エラー変数** + +ノードのエラー処理を「デフォルト値」または「エラーブランチ」に設定した場合、エラー状況が発生すると、`error_type`および`error_message`といった変数を通じて下流ノードにエラー情報が伝えられます。 + + + + + + + + + + + + + + + + + + +
+| 変数名 | 説明 |
+| --- | --- |
+| error_type | エラーのタイプ。ノードの種類によって異なるエラータイプがあり、開発者はそれぞれのエラーに対して適切な対処法を選択できます。 |
+| error_message | 具体的なエラーメッセージ。これはエラー発生元のノードが出力する詳細な障害情報であり、開発者はこれを利用してエラーを修正したり、メールツールを通じて情報を送信したりできます。 |
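エラーブランチ側でこれらの変数をどう扱うかのイメージとして、コードノードでエラー情報を通知用のテキストに整形する最小限のスケッチを示します(`def main` の形式は本ドキュメントのコードノード例に従っていますが、整形ロジックと出力変数名 `notification_text` は説明用の仮のものです):

```python
def main(error_type: str, error_message: str) -> dict:
    # 上流ノードから渡された error_type / error_message を、
    # 後続の通知ノード(メールツールなど)で扱いやすい
    # 1つの文字列にまとめる(整形方法は一例)
    summary = f"[{error_type}] {error_message}"
    return {'notification_text': summary}
```

このように整形した文字列をエラーブランチに接続したメールツールノードなどへ渡せば、障害の内容をそのまま通知に使えます。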
\ No newline at end of file diff --git a/ja-jp/guides/workflow/error-handling/saretaerrojikku.mdx b/ja-jp/guides/workflow/error-handling/saretaerrojikku.mdx new file mode 100644 index 00000000..5ce61498 --- /dev/null +++ b/ja-jp/guides/workflow/error-handling/saretaerrojikku.mdx @@ -0,0 +1,5 @@ +--- +title: 事前定義されたエラー処理ロジック +--- + + diff --git a/ja-jp/user-guide/build-app/flow-app/file-upload.mdx b/ja-jp/guides/workflow/file-upload.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/file-upload.mdx rename to ja-jp/guides/workflow/file-upload.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/README.mdx b/ja-jp/guides/workflow/nodes/README.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/README.mdx rename to ja-jp/guides/workflow/nodes/README.mdx diff --git a/ja-jp/guides/workflow/nodes/agent.mdx b/ja-jp/guides/workflow/nodes/agent.mdx new file mode 100644 index 00000000..04c24522 --- /dev/null +++ b/ja-jp/guides/workflow/nodes/agent.mdx @@ -0,0 +1,99 @@ +--- +title: エージェント +--- + +## 定義 + +エージェントノードは、Difyチャットフローやワークフローにおいて自律的なツール呼び出しを実現するコンポーネントです。異なるエージェント推論戦略を統合することで、大規模言語モデル(LLM)が実行時に動的にツールを選択・実行し、多段階推論を可能にします。 + +## 設定手順 + +### ノードの追加 + +チャットフローやワークフローのエディタで、コンポーネントパネルからエージェントノードをキャンバスにドラッグします。 + +エージェントノードの追加 + +### エージェント戦略の選択 + +ノード設定パネルで **エージェント戦略** をクリックします。 + +エージェント戦略設定 + +ドロップダウンメニューから推論戦略を選択します。Difyは **Function Calling と ReAct** を標準装備しており、**Marketplace → エージェント戦略** カテゴリから追加インストール可能です。 + +推論戦略選択 + +#### 1. Function Calling + +ユーザー指示を事前定義された関数/ツールにマッピングし、LLMが意図を識別→適切な関数を選択→パラメータ抽出という明確なツール呼び出しメカニズムです。 + +特徴: + +**• 高精度**: 明確なタスクに直結するツールを直接呼び出し + +**• 外部連携容易**: API/ツールを関数化して統合可能 + +**• 構造化出力**: 下流ノード処理向けの定型化された情報出力 + +Function Calling + +#### 2. 
ReAct(Reason + Act) + +思考(Reason)と行動(Act)を交互に繰り返す戦略です。LLMが現状分析→ツール選択→実行→結果評価のサイクルを問題解決まで継続します。 + +特徴: + +**• 外部リソース活用**: モデル単体では困難なタスクを実行可能 + +**• 処理追跡性**: 思考プロセスが可視化され説明性が向上 + +**• 広範な適用**: Q&A/情報検索/タスク実行など多様なシナリオに対応 + +ReAct戦略 + +開発者は公開[リポジトリ](https://github.com/langgenius/dify-plugins)へ戦略プラグインを提供可能で、審査後Marketplaceで公開されます。 + +### ノードパラメータ設定 + +選択した戦略に応じた設定項目が表示されます。標準装備のFunction Calling/ReActでは以下を設定: + +1. **モデル**: エージェントを駆動するLLMを選択 +2. **ツールリスト**: 「+」で呼び出し可能ツールを追加 + * 検索: インストール済みツールから選択 + * 認証: APIキーなどの認証情報を入力 + * 説明とパラメータ: ツールの用途説明とパラメータ設定 +3. **指示文**: タスク目標とコンテキストを定義(Jinja構文で上位ノード変数参照可) +4. **クエリ**: ユーザー入力を受け取る変数 +5. **最大実行ステップ数**: 処理サイクルの上限値 +6. **出力変数**: ノードが出力するデータ構造 + +## ログ確認 + +実行時には詳細なログが生成されます。基本情報(入出力/トークン使用量/処理時間/状態)に加え、「詳細」から各処理ステップの出力を確認可能です。 + +ログ確認 \ No newline at end of file diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/answer.mdx b/ja-jp/guides/workflow/nodes/answer.mdx similarity index 95% rename from ja-jp/user-guide/build-app/flow-app/nodes/answer.mdx rename to ja-jp/guides/workflow/nodes/answer.mdx index ea4fd253..b8aea582 100644 --- a/ja-jp/user-guide/build-app/flow-app/nodes/answer.mdx +++ b/ja-jp/guides/workflow/nodes/answer.mdx @@ -1,6 +1,5 @@ --- -title: 直接返信 -version: '日本語' +title: 回答 --- ### 定義 diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/code.mdx b/ja-jp/guides/workflow/nodes/code.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/code.mdx rename to ja-jp/guides/workflow/nodes/code.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/doc-extractor.mdx b/ja-jp/guides/workflow/nodes/doc-extractor.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/doc-extractor.mdx rename to ja-jp/guides/workflow/nodes/doc-extractor.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/end.mdx b/ja-jp/guides/workflow/nodes/end.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/end.mdx rename to ja-jp/guides/workflow/nodes/end.mdx diff 
--git a/ja-jp/user-guide/build-app/flow-app/nodes/http-request.mdx b/ja-jp/guides/workflow/nodes/http-request.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/http-request.mdx rename to ja-jp/guides/workflow/nodes/http-request.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/ifelse.mdx b/ja-jp/guides/workflow/nodes/ifelse.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/ifelse.mdx rename to ja-jp/guides/workflow/nodes/ifelse.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/iteration.mdx b/ja-jp/guides/workflow/nodes/iteration.mdx similarity index 99% rename from ja-jp/user-guide/build-app/flow-app/nodes/iteration.mdx rename to ja-jp/guides/workflow/nodes/iteration.mdx index 10d6e97a..16e2bb56 100644 --- a/ja-jp/user-guide/build-app/flow-app/nodes/iteration.mdx +++ b/ja-jp/guides/workflow/nodes/iteration.mdx @@ -1,6 +1,5 @@ --- -title: イテレーション -version: '日本語' +title: 反復処理(イテレーション) --- ### 定義 diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx b/ja-jp/guides/workflow/nodes/knowledge-retrieval.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx rename to ja-jp/guides/workflow/nodes/knowledge-retrieval.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/list-operator.mdx b/ja-jp/guides/workflow/nodes/list-operator.mdx similarity index 99% rename from ja-jp/user-guide/build-app/flow-app/nodes/list-operator.mdx rename to ja-jp/guides/workflow/nodes/list-operator.mdx index da2e6a9d..2e43bcab 100644 --- a/ja-jp/user-guide/build-app/flow-app/nodes/list-operator.mdx +++ b/ja-jp/guides/workflow/nodes/list-operator.mdx @@ -1,6 +1,5 @@ --- -title: リスト操作 -version: '日本語' +title: リスト処理 --- リスト変数は、文章、画像、音声、映像など、さまざまなファイルを同時にアップロードすることができます。ユーザーがファイルをアップロードすると、すべてのファイルが同じ `Array[File]` 配列変数に保存されますが、**その後の個別ファイルの処理が難しくなります。** diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/llm.mdx 
b/ja-jp/guides/workflow/nodes/llm.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/llm.mdx rename to ja-jp/guides/workflow/nodes/llm.mdx diff --git a/ja-jp/guides/workflow/nodes/loop.mdx b/ja-jp/guides/workflow/nodes/loop.mdx new file mode 100644 index 00000000..20cffeb7 --- /dev/null +++ b/ja-jp/guides/workflow/nodes/loop.mdx @@ -0,0 +1,91 @@ +--- +title: 繰り返し処理(ループ) +--- + +## 概要 + +繰り返し処理(ループ)ノードは、前回の結果に依存する反復タスクを実行し、終了条件を満たすか最大繰り返し回数に達するまで継続します。 + +## 繰り返し処理ノードと反復処理ノードの違い + + + + + + + + + + + + + + + + + + + + + +
+| タイプ | 特徴 | 用途 |
+| --- | --- | --- |
+| 繰り返し処理(ループ) | 各回の処理が前回の結果に依存する。 | 再帰処理や最適化問題など、前回の計算結果を必要とする処理に適している。 |
+| 反復処理(イテレーション) | 各回の処理は独立しており、前回の結果に依存しない。 | データの一括処理など、各処理を独立して実行できるタスクに適している。 |
+ +## 繰り返し処理(ループ)ノードの設定方法 + + + + + + + + + + + + + + + + + + + + + +
+| パラメータ | 説明 | 例 |
+| --- | --- | --- |
+| ループ終了条件 | ループを終了するタイミングを決定する式 | x < 50、error_rate < 0.01 |
+| 最大繰り返し回数(Maximum Loop Count) | 無限ループを防ぐための繰り返し回数の上限 | 10、100、1000 |
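後述の使用例では、コードノードで1〜100のランダムな数値を生成し、「ランダム数値 < 50」をループ終了条件に設定します。この数値生成部分は、Difyのコードノードでは例えば次のように書けます(出力変数名 `random_number` は説明用の仮の名前です):

```python
import random

def main() -> dict:
    # 1〜100 の範囲の整数を1つ生成して返す。
    # ループ終了条件側では random_number < 50 のような式で
    # この出力変数を参照する想定(変数名は仮)
    return {'random_number': random.randint(1, 100)}
```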
+ +ループノードの設定画面 + +## 使用例 + +**目標:50未満の値が出るまで、1から100までのランダムな数値を生成する。** + +**実装手順**: + +1. `code`ノードを使用して1-100の間のランダムな数値を生成します。 + +2. `if`ノードを使用して数値を評価します: + - 50未満の場合:`done`を出力してループを終了します。 + - 50以上の場合:ループを継続し、別のランダムな数値を生成します。 + +3. ループ終了条件を「ランダム数値 < 50」に設定します。 + +4. 50未満の数値が出現したらループは自動的に終了します。 + +ループノードの使用例 + +## 今後の拡張 + +**今後のリリースには以下の機能が追加される予定です:** + +- ループ変数:繰り返し間で値を保存・参照できるようにし、状態管理と条件付きロジックを強化します。 + +- `break`ノード:実行パス内からループを直接終了できるようにし、より高度な制御フローパターンを実現します。 \ No newline at end of file diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx b/ja-jp/guides/workflow/nodes/parameter-extractor.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx rename to ja-jp/guides/workflow/nodes/parameter-extractor.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/question-classifier.mdx b/ja-jp/guides/workflow/nodes/question-classifier.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/question-classifier.mdx rename to ja-jp/guides/workflow/nodes/question-classifier.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/start.mdx b/ja-jp/guides/workflow/nodes/start.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/start.mdx rename to ja-jp/guides/workflow/nodes/start.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/template.mdx b/ja-jp/guides/workflow/nodes/template.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/template.mdx rename to ja-jp/guides/workflow/nodes/template.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/tools.mdx b/ja-jp/guides/workflow/nodes/tools.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/tools.mdx rename to ja-jp/guides/workflow/nodes/tools.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx b/ja-jp/guides/workflow/nodes/variable-aggregator.mdx similarity index 98% rename from 
ja-jp/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx rename to ja-jp/guides/workflow/nodes/variable-aggregator.mdx index 53bbb854..57f6f355 100644 --- a/ja-jp/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx +++ b/ja-jp/guides/workflow/nodes/variable-aggregator.mdx @@ -1,6 +1,5 @@ --- -title: 变量聚合 -version: '日本語' +title: 変数集約 --- ### 定義 diff --git a/ja-jp/user-guide/build-app/flow-app/nodes/variable-assigner.mdx b/ja-jp/guides/workflow/nodes/variable-assigner.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/nodes/variable-assigner.mdx rename to ja-jp/guides/workflow/nodes/variable-assigner.mdx diff --git a/ja-jp/user-guide/build-app/flow-app/orchestrate-node.mdx b/ja-jp/guides/workflow/orchestrate-node.mdx similarity index 66% rename from ja-jp/user-guide/build-app/flow-app/orchestrate-node.mdx rename to ja-jp/guides/workflow/orchestrate-node.mdx index bbac1471..558b0ecd 100644 --- a/ja-jp/user-guide/build-app/flow-app/orchestrate-node.mdx +++ b/ja-jp/guides/workflow/orchestrate-node.mdx @@ -1,17 +1,14 @@ --- title: オーケストレートノード -version: '日本語' --- -チャットフローおよびワークフローアプリケーションは、ビジュアルなドラッグアンドドロップ機能を通じてノードのオーケストレーションをサポートしており、**シリアル**および**パラレル**の2つのオーケストレーションデザインパターンがあります。 +チャットフローおよびワークフローアプリケーションは、ビジュアルなドラッグアンドドロップ機能を通じてノードのオーケストレーションをサポートしており、シリアルおよびパラレルの2つのオーケストレーションデザインパターンがあります。 - - 串行和并行节点流对比图 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/3984e13db72e2bd19870f5764ec000cf.jpeg) ## シリアルノードのデザインパターン -このパターンでは、ノードはあらかじめ定義された順序で順次実行されます。各ノードは、前のノードがタスクを完了し、出力を生成した後にのみ操作を開始します。これにより、**タスクが論理的な順序で実行されることが保証されます**。 +このパターンでは、ノードはあらかじめ定義された順序で順次実行されます。各ノードは、前のノードがタスクを完了し、出力を生成した後にのみ操作を開始します。これにより、タスクが論理的な順序で実行されることが保証されます。 シリアルパターンを実装した「小説生成」ワークフローアプリケーションを考えてみましょう。ユーザーが小説のスタイル、リズム、キャラクターを入力した後、LLMが順番に小説の概要、プロット、エンディングを完成させます。各ノードは前のノードの出力に基づいて動作し、小説のスタイルに一貫性をもたらします。 @@ -19,19 +16,15 @@ version: '日本語' 1. 2つのノードの間にある「+」アイコンをクリックして新しいシリアルノードを挿入します。 2. ノードを順次リンクします。 -3. 
すべてのパスを「End」ノードに収束させて、ワークフローを最終承認します。 +3. すべてのパスを「終了」ノード(ワークフロー)/「直接回答」ノード(チャットフロー)に収束させて、ワークフローを最終承認します。 - - 串行结构设计示意图 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/e8e884e146994b5f95cb16ec31cdd81b.png) ### シリアル構造のアプリのログをチェックする シリアル構造のアプリは、ログが順次ノードの操作を表示します。会話ボックスの右上にある "View Logs - Tracing" を順にクリックすると、各ノードの入力、出力、トークン消費、実行時間を含む完全なワークフロープロセスが表示されます。 - - 串行结构应用日志界面 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/1707ee3651f154fcb90c882a2aeab6e9.png) ## パラレルノードのデザインパターン @@ -39,58 +32,38 @@ version: '日本語' パラレルアーキテクチャを実装した翻訳ワークフローアプリケーションを考えてみましょう。ユーザーがソーステキストを入力してワークフローをトリガーすると、パラレル構造内のすべてのノードが前のノードから同時に命令を受け取ります。これにより、複数の言語への同時翻訳が可能となり、全体の処理時間が大幅に短縮されます。 - - 并行设计示意图 - - ### パラレルノードのデザインパターン 次の4つの方法は、ノードの追加やビジュアル操作を通じてパラレル構造を作成する方法を示しています: -**方法1** - +**方法1**\ ノードの上にカーソルを合わせると「+」ボタンが表示されます。クリックすると、複数のノードが追加され、自動的にパラレル構造が形成されます。 - - 新建并行结构方式1 - - -**方法2** +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/b93ff4b81f2a5526a8787aa1e9fb314d.png) +**方法2**\ ノードから接続を延長するには、ノードの「+」ボタンをドラッグしてパラレル構造を作成します。 - - 新建并行结构方式2 - - -**方法3** +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/8deebdb38e3848966ed667e6ed97bdce.png) +**方法3**\ キャンバス上に複数のノードがある場合は、ビジュアルにドラッグしてリンクし、パラレル構造を形成します。 - - 新建并行结构方式3 - - -**方法4** +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/3997bca0f5efa1a3c4a214dbe3ed1f0c.png) +**方法4**\ キャンバスベースの方法に加えて、ノードの右側パネルの「Next Step」セクションからノードを追加することで、パラレル構造を生成することもできます。このアプローチにより、自動的にパラレル構成が作成されます。 - - 新建并行结构方式4 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/516d11ac4c75ecf394da371f76feb89c.png) - -**Tips:** +**注意:** * 任意のノードがパラレル構造の下流ノードとして機能します。 * ワークフローアプリケーションには、単一かつ一意な「end」ノードが必要です。 * チャットフローアプリケーションでは複数の「answer」ノードがサポートされます。これらのアプリケーションの各パラレル構造は、適切なコンテンツの出力を確保するために「answer」ノードで終了する必要があります。 * すべてのパラレル構造は同時に実行されます。パラレル構造内のノードは、タスクを完了した後に結果を出力し、出力には順序関係がありません。パラレル構造が単純であればあるほど、結果の出力が速くなります。 - - - 
Chatflow 应用中的并行结构示例 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/1d0884e4c9bfa548d84849871719d646.png) ### パラレル構造の作り方 @@ -100,11 +73,9 @@ version: '日本語' 通常のパラレルは、「開始 | パラレルノード | 終了」の3階層関係を指します。この構造は直感的で、ユーザー入力後に複数のタスクを同時に実行できます。 -> パラレルブランチの上限は10です。 +パラレルブランチの上限は10です。 - - 普通并行结构示例 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/5ba85864454880561ec95a37db382f20.png) #### 2. ネストされたパラレル @@ -112,30 +83,22 @@ version: '日本語' ワークフローは、最大3層までのネスト関係をサポートします。 - - 嵌套并行结构示例 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/036f9fcfb1d0f8dedbd34e90ebb64c29.png) #### 3. 条件分岐 + パラレル パラレル構造は条件分岐と組み合わせて使用することもできます。 - - 条件分支和并行结构结合示例 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/d28637a39327032fa333fd49b9dd2e73.png) #### 4. イテレーション + パラレル このパターンは、イテレーションとパラレル構造を組み合わせたものです。 - - 迭代分支和并行结构结合示例 - +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/bc06917031cf52c2e3d8cf9fe8a8dc8b.png) ### パラレル構造のアプリのログをチェックする パラレル構造をもつアプリケーションは、ツリーのような形式でログを生成します。折りたたみ可能なパラレルノード グループにより、個々のノード ログを簡単に表示できます。 - - 并行结构应用日志界面 - \ No newline at end of file +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/ad6acb838b58d2e0c8f99669b24aa20d.png) diff --git a/ja-jp/guides/workflow/publish.mdx b/ja-jp/guides/workflow/publish.mdx new file mode 100644 index 00000000..bce63d34 --- /dev/null +++ b/ja-jp/guides/workflow/publish.mdx @@ -0,0 +1,24 @@ +--- +title: アプリケーション公開 +--- + +デバッグが完了したら、右上の「公開する」をクリックして、このワークフローを保存し、さまざまなタイプのアプリとして素早く公開することができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/guides/workflow/ea40850e9b8cc216b540362a7425ac5c.png) + +対話型アプリは以下の形式で公開できます: + +* 直接実行 +* Webサイトに埋め込む +* APIアクセス + +ワークフローアプリは以下の形式で公開できます: + +* 直接実行 +* バッチ処理 +* APIアクセス +* ツールとして公開 + + +複数バージョンのチャットフローやワークフローを管理する場合は、[バージョン管理](https://docs.dify.ai/ja-jp/guides/management/version-control)を参照してください。 + diff --git 
a/ja-jp/guides/workflow/shortcut-key.mdx b/ja-jp/guides/workflow/shortcut-key.mdx new file mode 100644 index 00000000..046b2651 --- /dev/null +++ b/ja-jp/guides/workflow/shortcut-key.mdx @@ -0,0 +1,24 @@ +--- +title: ショートカットキー +--- + +チャットフロー / ワークフロー アプリのオーケストレーションページでは、ノード配置の効率を向上させるために次のショートカットキーがサポートされています。 + +| Windows | macOS | 説明 | | ---------------- | ------------------- | ------------------------------ | | Ctrl + C | Command + C | ノードをコピーします | | Ctrl + V | Command + V | ノードを貼り付けます | | Ctrl + D | Command + D | ノードを複製します | | Ctrl + O | Command + O | ノードを整理します | | Ctrl + Z | Command + Z | 操作を元に戻します | | Ctrl + Y | Command + Y | 元に戻した操作をやり直します | | Ctrl + Shift + Z | Command + Shift + Z | 元に戻した操作をやり直します | | Ctrl + 1 | Command + 1 | キャンバスをフィット表示にします | | Ctrl + (-) | Command + (-) | キャンバスを縮小します | | Ctrl + (=) | Command + (=) | キャンバスを拡大します | | Shift + 1 | Shift + 1 | キャンバスを100%にリセットします | | Shift + 5 | Shift + 5 | キャンバスを50%に縮小します | | H | H | ハンドモードに切り替えます | | V | V | ポインターモードに切り替えます | | Delete/Backspace | Delete/Backspace | 選択したノードを削除します | | Alt + R | Option + R | ワークフローを実行します | diff --git a/ja-jp/user-guide/build-app/flow-app/variables.mdx b/ja-jp/guides/workflow/variables.mdx similarity index 100% rename from ja-jp/user-guide/build-app/flow-app/variables.mdx rename to ja-jp/guides/workflow/variables.mdx diff --git a/ja-jp/guides/workspace/app.mdx b/ja-jp/guides/workspace/app.mdx new file mode 100644 index 00000000..8b8a7a58 --- /dev/null +++ b/ja-jp/guides/workspace/app.mdx @@ -0,0 +1,23 @@ +--- +title: 発見 +--- + +## テンプレートアプリケーションの使用 + +**探索 > 発見** では、いくつかの一般的なテンプレートアプリケーションを提供しています。これらのアプリケーションは、翻訳、ライティング、プログラミング、アシスタントなどをカバーしています。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/workspace/0e031be438f3099fdb1680ccc9799a6d.jpeg) + +テンプレートアプリケーションを使用したい場合は、テンプレート上の「ワークスペースに追加」ボタンをクリックしてください。これで左側のワークスペースでそのアプリケーションを使用することができます。 + 
+![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/workspace/01216f0ccbd56855c41eddce194cbf48.jpeg) + +新しいアプリケーションを作成するためにテンプレートを修正したい場合は、テンプレート上の「カスタマイズ」ボタンをクリックしてください。 + +## ワークスペース + +ワークスペースはアプリケーションのナビゲーションです。ワークスペースでアプリケーションをクリックすると、そのアプリケーションを直接使用することができます。 + +![](https://assets-docs.dify.ai/dify-enterprise-mintlify/jp/workspace/aa95de9ca884480c7c52a6ee7239de1d.jpeg) + +ワークスペースには、あなた自身のアプリケーションや他のチームメンバーがワークスペースに追加したアプリケーションが含まれています。 \ No newline at end of file diff --git a/ja-jp/introduction.mdx b/ja-jp/introduction.mdx index a754bea3..be1efd09 100644 --- a/ja-jp/introduction.mdx +++ b/ja-jp/introduction.mdx @@ -1,6 +1,5 @@ --- title: Difyエンタプライス版へようこそ -version: '日本語' --- Dify エンタープライズ版は、大規模な組織やチーム向けのプライベートデプロイメントAIミドルウェアソリューションであり、企業内でのAI+時代への移行を促進することを目的としています。 diff --git a/ja-jp/management/app-management.mdx b/ja-jp/management/app-management.mdx deleted file mode 100644 index 5f84fced..00000000 --- a/ja-jp/management/app-management.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: アプリの管理 -version: '日本語' ---- - -## [アプリ情報の編集](#edit-app-info) - -アプリを作成した後に、アプリ名や説明を変更したい場合は、アプリの左上隅にある「情報の編集」をクリックしてください。これにより、アプリのアイコン、名前、または説明を修正できます。 - - - -## [アプリの複製](#copy-app) - -すべてのアプリは複製が可能です。アプリの左上隅にある「複製」をクリックしてください。 - -## [アプリのエクスポート](#export-app) - -Difyで作成されたアプリはDSL形式でエクスポートをサポートしており、設定ファイルを任意のDifyチームに自由にインポートできます。 - -DSLファイルは次の2つの方法でエクスポートできます: - -* シナリオページ中のアプリカードの右下隅の"DSLをエクスポート"をクリックする。 -* アプリ内のオーケストレートページに入れるあど、左上隅の"DSLをエクスポート"のボタンをクリックする。 - -![](/ja-jp/img/37a012ebb30449ccebcb29f3ee01d62f.png) - -DSLファイルは以下の機密情報を含まれません: - -* APIキーなどの第三者ツールの認証情報 -* 環境変数に`Secret`が含まれる場合、DSLをエクスポートするときに機密情報のエクスポートを許可するかどうかを尋ねるメッセージが表示されます。 - -![](/ja-jp/img/9d2b1f92367982fa4416e07c5b5669cc.png) - - -Dify DSLは、Dify.AIによってv0.6以降で定義されたAIアプリエンジニアリングファイル標準です。ファイル形式はYMLで、アプリの基本的な説明、モデルパラメータ、オーケストレーション構成などをカバーしています。 - - -## アプリの削除 - -アプリを削除したい場合は、アプリの左上隅にある「削除」をクリックしてください。 - - -⚠️ アプリの削除は取り消すことができません。すべてのユーザーがあなたのアプリにアクセスできなくなり、アプリ内のすべてのプロンプト、オーケストレーション構成、ログが削除されます。 - diff 
--git a/ja-jp/management/personal-account-management.mdx b/ja-jp/management/personal-account-management.mdx deleted file mode 100644 index 1d730f3b..00000000 --- a/ja-jp/management/personal-account-management.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: 個人アカウントの管理 -version: '日本語' ---- - -## 個人情報の変更 - -次の詳細を変更できます: - -* アバター -* ユーザー名 -* メールアドレス -* パスワード - - - -### ログインをバインドする方法 - -GitHubやGoogleアカウントのアカウントを利用しDifyチームにログインができます。これらを設定するには、Difyチームのホームページで右上隅のアバターをクリックし、**「統合」** を選択してください。 - -### 表示言語の変更 - -表示言語を変更するには、Difyチームのホームページで右上隅のアバターをクリックし、**「言語」** を選択します。Difyは以下の言語をサポートしています: - -* 英語 -* 中国語(簡体字) -* 中国語(繁体字) -* ポルトガル語(ブラジル) -* フランス語(フランス) -* 日本語(日本) -* 韓国語(韓国) -* ロシア語(ロシア) -* イタリア語(イタリア) -* タイ語(タイ) -* インドネシア語 -* ウクライナ語(ウクライナ) - -Difyはコミュニティのボランティアによる追加の言語バージョンの提供を歓迎しています。貢献をご希望の方は、[GitHubリポジトリ](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)をご覧ください。 diff --git a/ja-jp/management/team-members-management.mdx b/ja-jp/management/team-members-management.mdx deleted file mode 100644 index 7d0e4590..00000000 --- a/ja-jp/management/team-members-management.mdx +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: チームメンバーの管理 -version: '日本語' ---- - -このガイドでは、Difyチーム内のメンバーを管理する方法について説明します。 - -### メンバーの追加 - - -チームの所有者のみがチームメンバーを追加する権限を持っています。 - - -メンバーを追加するには、チームの所有者は右上隅のアバターをクリックし、**"メンバー"** → **"追加"** を選択します。メールアドレスを入力し、メンバー権限を割り当ててプロセスを完了します。 - - - - - -追加されたメンバーは、URLリンクまたはメール招待を通じて登録を完了することができます。 - -### メンバーの権限 - -チームメンバーは、所有者、管理人、編集者、メンバーに分類されます。 - -* **所有者** - * ロールの説明: チームの最初のメンバーで、最も高いレベルの権限を持ち、チーム全体の運営と管理を担当します。 - * 権限の概要: チームメンバーの管理、メンバー権限の調整、モデルプロバイダーの設定、アプリケーションの作成と削除、ナレッジベースの作成、ツールライブラリの設定などの権限を持ちます。 -* **管理人** - * ロールの説明: チームの管理人で、チームメンバーとモデルプロバイダーの管理を担当します。 - * 権限の概要: メンバー権限を調整することはできませんが、チームメンバーの追加や削除、モデルプロバイダーの設定、アプリケーションの作成、編集、削除、ナレッジベースの作成、ツールライブラリの設定などの権限を持ちます。 -* **編集者** - * ロールの説明: 通常のチームメンバーで、共同でアプリケーションの作成と編集を担当します。 - * 権限の概要: チームメンバーの管理、モデルプロバイダーの設定、ツールライブラリの設定はできません。アプリケーションの作成、編集、削除、ナレッジベースの作成などの権限を持ちます。 -* **メンバー** - * ロールの説明: 
通常のチームメンバーで、チーム内で作成されたアプリケーションの閲覧と使用のみが許可されます。 - * 権限の概要: チーム内でのアプリケーションの使用とツールの使用のみが許可されます。 - -### メンバーの削除 - - -チームの所有者のみがチームメンバーを削除する権限を持っています。 - - -メンバーを削除するには、Difyチームのホームページの右上隅のアバターをクリックし、**"設定"** → **"メンバー"** に移動し、削除するメンバーを選択して **"チームから削除"** をクリックします。 - - - - - -### よくある質問 - -#### 1. チームオーナーを変更するにはどうすればよいですか? - -チームオーナーは最高権限を持ち、チーム構造の安定性を維持するため、一度設定されたチームオーナーは手動で変更することができません。 - -#### 2. チームを削除するにはどうすればよいですか? - -チームデータのセキュリティ上の理由から、チームオーナーは自身のチームを自己削除することはできません。 - -#### 3. チームメンバーのアカウントを削除するにはどうすればよいですか? - -チームオーナー/管理者はチームメンバーのアカウントを削除することはできません。アカウントの削除はアカウント所有者自身が申請する必要があり、他者が削除することはできません。アカウントを削除する代わりに、メンバーをチームから削除することで、そのユーザーのチームへのアクセス権限を無効にすることができます。 diff --git a/scripts/auto-url-check.py b/scripts/auto-url-check.py new file mode 100644 index 00000000..32db80b7 --- /dev/null +++ b/scripts/auto-url-check.py @@ -0,0 +1,757 @@ +#!/usr/bin/env python3 +""" +多线程版GitBook链接检查器 + +此脚本使用多线程并行检查在线链接,大幅提高检查速度。 +生成两个报告文件: +1. 包含所有链接的完整报告 +2. 仅包含错误链接的报告 +""" + +import os +import re +import sys +import time +import threading +import queue +from concurrent.futures import ThreadPoolExecutor +from collections import defaultdict +from urllib.parse import urlparse + +try: + import requests + from requests.exceptions import RequestException +except ImportError: + print("正在安装requests库...") + import subprocess + subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"]) + import requests + from requests.exceptions import RequestException + +class LinkChecker: + def __init__(self, summary_path, base_dir=None, verify_online=True, max_threads=10): + """ + 初始化链接检查器 + + Args: + summary_path: SUMMARY.md文件路径 + base_dir: 文档根目录,默认为SUMMARY.md所在目录 + verify_online: 是否验证在线链接 + max_threads: 最大线程数 + """ + self.summary_path = os.path.abspath(summary_path) + self.base_dir = base_dir or os.path.dirname(self.summary_path) + self.verify_online = verify_online + self.max_threads = max_threads + self.summary_links = [] # SUMMARY.md中的链接 + self.md_links = defaultdict(list) # 
每个文档中引用的链接 + self.processed_files = set() # 已处理的文件 + self.summary_content = "" # SUMMARY.md的内容 + self.invalid_links = [] # 存储所有无效链接 + + # 图片文件扩展名 + self.image_extensions = ('.png', '.jpg', '.jpeg', '.gif', '.svg', '.bmp', '.tiff', '.webp') + + # 在线链接缓存,避免重复检查 + self.online_link_cache = {} + self.online_link_cache_lock = threading.Lock() # 线程安全的缓存锁 + + # 用于存储待检查的在线链接 + self.online_links_queue = queue.Queue() + + # 进度统计 + self.total_online_links = 0 + self.checked_online_links = 0 + self.progress_lock = threading.Lock() + + def is_image_link(self, link): + """ + 检查链接是否为图片链接 + + Args: + link: 链接路径 + + Returns: + is_image: 是否为图片链接 + """ + return link.lower().endswith(self.image_extensions) + + def check_online_link(self, url): + """ + 检查在线链接是否有效 + + Args: + url: 在线链接URL + + Returns: + is_valid: 链接是否有效 + """ + # 如果已经检查过,直接返回缓存结果 + with self.online_link_cache_lock: + if url in self.online_link_cache: + return self.online_link_cache[url] + + if not self.verify_online: + # 如果不验证在线链接,默认返回无效 + with self.online_link_cache_lock: + self.online_link_cache[url] = False + return False + + try: + # 先尝试HEAD请求,速度更快 + response = requests.head( + url, + timeout=5, + allow_redirects=True, + headers={'User-Agent': 'Mozilla/5.0 GitBook-Link-Checker/1.0'} + ) + + if response.status_code < 400: + # 状态码小于400,认为链接有效 + with self.online_link_cache_lock: + self.online_link_cache[url] = True + return True + + # HEAD请求失败,尝试GET请求 + response = requests.get( + url, + timeout=5, + allow_redirects=True, + headers={'User-Agent': 'Mozilla/5.0 GitBook-Link-Checker/1.0'} + ) + + result = response.status_code < 400 + with self.online_link_cache_lock: + self.online_link_cache[url] = result + return result + + except RequestException: + # 请求异常,链接无效 + with self.online_link_cache_lock: + self.online_link_cache[url] = False + return False + + def resolve_path(self, link, current_dir): + """ + 解析链接的实际路径 + + Args: + link: 链接路径 + current_dir: 当前文件所在目录 + + Returns: + resolved_path: 解析后的路径 + is_external: 是否为外部链接 + 
is_valid: 链接是否有效 + """ + if not link: + return None, False, False + + # 处理锚点链接 + if '#' in link: + link_part = link.split('#')[0] + if not link_part: # 如果只有锚点,没有路径部分 + return None, False, True # 假设内部锚点是有效的 + link = link_part + + # 检查是否为图片链接 + if self.is_image_link(link): + return None, False, True # 跳过图片链接,并假设它们是有效的 + + # 处理外部链接 + if link.startswith(('http://', 'https://', 'mailto:', 'tel:')): + # 如果是http/https链接,加入待检查队列 + if link.startswith(('http://', 'https://')) and self.verify_online: + # 将链接添加到待检查队列 + self.online_links_queue.put(link) + with self.progress_lock: + self.total_online_links += 1 + + # 暂时返回未知状态,后续会更新 + return link, True, None + elif link.startswith(('http://', 'https://')) and not self.verify_online: + # 如果不验证在线链接,标记为错误 + return link, True, False + else: + # mailto和tel链接默认有效 + return link, True, True + + # 处理绝对路径 (从文档根目录开始) + if link.startswith('/'): + resolved_path = os.path.normpath(os.path.join(self.base_dir, link.lstrip('/'))) + # 处理相对路径 (从当前文件所在目录开始) + else: + resolved_path = os.path.normpath(os.path.join(current_dir, link)) + + # 处理目录链接 + if os.path.isdir(resolved_path): + readme_path = os.path.join(resolved_path, 'README.md') + if os.path.exists(readme_path): + return readme_path, False, True + index_path = os.path.join(resolved_path, 'index.md') + if os.path.exists(index_path): + return index_path, False, True + # 如果没有README.md或index.md,保持原样 + return resolved_path, False, os.path.exists(resolved_path) + + # 处理不带扩展名的文件引用 + if not os.path.exists(resolved_path) and '.' 
not in os.path.basename(resolved_path): + md_path = f"{resolved_path}.md" + if os.path.exists(md_path): + return md_path, False, True + + return resolved_path, False, os.path.exists(resolved_path) + + def online_link_worker(self): + """工作线程:处理在线链接检查""" + while True: + try: + # 从队列获取链接 + url = self.online_links_queue.get(block=False) + + # 检查链接 + is_valid = self.check_online_link(url) + + # 更新进度 + with self.progress_lock: + self.checked_online_links += 1 + checked = self.checked_online_links + total = self.total_online_links + + # 显示进度 + print(f"在线链接检查进度: [{checked}/{total}] - {url} - {'✅' if is_valid else '❌'}") + + # 标记任务完成 + self.online_links_queue.task_done() + except queue.Empty: + # 队列为空,退出线程 + break + + def extract_sections_from_summary(self): + """ + 从SUMMARY.md提取所有章节信息 + + Returns: + sections: 章节列表 + """ + print(f"从 {self.summary_path} 提取章节信息...") + + try: + with open(self.summary_path, 'r', encoding='utf-8') as file: + self.summary_content = file.read() + except Exception as e: + print(f"读取文件时出错: {e}") + sys.exit(1) + + # 提取所有章节标题 + sections = [] + section_pattern = r'^#+\s+(.*?)(?:\s+)?$' + + for line in self.summary_content.split('\n'): + match = re.match(section_pattern, line) + if match: + section_title = match.group(1).strip() + sections.append(section_title) + + return sections + + def extract_links_from_summary(self): + """ + 从SUMMARY.md提取所有链接及其层级结构 + + Returns: + links: 链接列表,每项包含链接信息和层级 + """ + print(f"从 {self.summary_path} 提取链接...") + + # 记录当前所在章节 + current_section = "" + sections = self.extract_sections_from_summary() + + # 按行处理SUMMARY文件 + links = [] + + for line in self.summary_content.split('\n'): + # 检查是否是章节标题行 + section_match = re.match(r'^#+\s+(.*?)(?:\s+)?$', line) + if section_match: + current_section = section_match.group(1).strip() + continue + + # 检查缩进级别 + indent_match = re.match(r'^(\s*)\*', line) + if not indent_match: + continue + + indent = indent_match.group(1) + level = len(indent) // 2 # 假设每级缩进是2个空格 + + # 提取链接 + link_match = 
re.search(r'\[([^\]]+)\]\(([^)]+)\)', line) + if not link_match: + continue + + text, link = link_match.groups() + + # 跳过只有锚点的链接 + if link.startswith('#'): + continue + + # 解析实际文件路径 + file_path, is_external, is_valid = self.resolve_path(link, self.base_dir) + + # 添加链接 + link_info = { + 'text': text, + 'link': link, + 'file_path': file_path, + 'exists': is_valid, + 'level': level, + 'section': current_section, + 'is_external': is_external, + 'children': [], # 用于存储子链接 + 'source_file': 'SUMMARY.md' + } + + links.append(link_info) + + # 如果链接无效,添加到无效链接列表 + if is_valid is False: # 注意:is_valid可能为None(在线链接待检查) + self.invalid_links.append(link_info) + + # 构建层级结构 + root_links = [] + level_stack = [None] # 用于跟踪每个级别的最后一个链接 + + for link in links: + level = link['level'] + + # 调整栈以匹配当前级别 + while len(level_stack) > level + 1: + level_stack.pop() + + # 扩展栈以匹配当前级别 + while len(level_stack) < level + 1: + level_stack.append(None) + + if level == 0: + # 顶级链接 + root_links.append(link) + else: + # 子链接,添加到父链接的children列表中 + parent = level_stack[level - 1] + if parent: + parent['children'].append(link) + + # 更新当前级别的最后一个链接 + level_stack[level] = link + + self.summary_links = root_links + return links + + def extract_links_from_markdown(self, file_path): + """ + 从Markdown文件中提取链接 + + Args: + file_path: Markdown文件路径 + + Returns: + links: 提取的链接列表 + """ + if not file_path or file_path in self.processed_files: + return [] + + if not os.path.exists(file_path) or not file_path.endswith('.md'): + return [] + + self.processed_files.add(file_path) + + try: + with open(file_path, 'r', encoding='utf-8') as file: + content = file.read() + except Exception as e: + print(f"读取文件 {file_path} 时出错: {e}") + return [] + + # 提取链接 + link_pattern = r'\[([^\]]+)\]\(([^)]+)\)' + matches = re.findall(link_pattern, content) + + links = [] + current_dir = os.path.dirname(file_path) + relative_source_path = os.path.relpath(file_path, self.base_dir) + + for text, link in matches: + # 检查是否为图片链接 + if 
self.is_image_link(link): + continue + + # 解析链接 + resolved_path, is_external, is_valid = self.resolve_path(link, current_dir) + + # 添加链接 + link_info = { + 'text': text, + 'link': link, + 'file_path': resolved_path, + 'exists': is_valid, + 'is_external': is_external, + 'source_file': relative_source_path + } + + links.append(link_info) + + # 存储到字典中,以文件路径为键 + if file_path not in self.md_links: + self.md_links[file_path] = [] + self.md_links[file_path].append(link_info) + + # 如果链接无效,添加到无效链接列表 + if is_valid is False: # 注意:is_valid可能为None(在线链接待检查) + self.invalid_links.append(link_info) + + return links + + def check_links(self): + """ + 递归检查所有链接 + """ + # 提取SUMMARY中的链接 + self.extract_links_from_summary() + + # 递归处理每个链接 + def process_link(link): + if not link.get('is_external') and link.get('exists') and link.get('file_path') and link.get('file_path').endswith('.md'): + try: + relative_path = os.path.relpath(link['file_path'], self.base_dir) + print(f"检查文件: {relative_path}") + self.extract_links_from_markdown(link['file_path']) + except Exception as e: + print(f"处理文件 {link.get('file_path')} 时出错: {e}") + + # 递归处理子链接 + for child in link.get('children', []): + process_link(child) + + # 处理所有顶级链接 + for link in self.summary_links: + process_link(link) + + # 如果需要验证在线链接,启动多线程进行检查 + if self.verify_online and self.total_online_links > 0: + self.check_online_links_with_threads() + + # 更新链接状态 + self.update_link_statuses() + + def check_online_links_with_threads(self): + """使用多线程检查在线链接""" + print(f"\n开始使用多线程检查在线链接,共有 {self.total_online_links} 个链接...") + + # 创建线程池 + num_threads = min(self.max_threads, self.total_online_links) + + with ThreadPoolExecutor(max_workers=num_threads) as executor: + # 提交任务 + futures = [executor.submit(self.online_link_worker) for _ in range(num_threads)] + + # 等待队列任务完成 + self.online_links_queue.join() + + print(f"所有在线链接检查完成,共 {self.total_online_links} 个") + + def update_link_statuses(self): + """根据检查结果更新链接状态""" + # 更新所有链接的有效性状态 + def update_link(link): + if 
link.get('is_external') and link.get('file_path') and link.get('file_path').startswith(('http://', 'https://')): + with self.online_link_cache_lock: + is_valid = self.online_link_cache.get(link['file_path'], False) + + link['exists'] = is_valid + + # 如果链接无效,添加到无效链接列表 + if not is_valid and link not in self.invalid_links: + self.invalid_links.append(link) + + # 递归处理子链接 + for child in link.get('children', []): + update_link(child) + + # 处理所有顶级链接 + for link in self.summary_links: + update_link(link) + + # 更新文档链接字典 + for file_path, links in self.md_links.items(): + for link in links: + if link.get('is_external') and link.get('file_path') and link.get('file_path').startswith(('http://', 'https://')): + with self.online_link_cache_lock: + is_valid = self.online_link_cache.get(link['file_path'], False) + + link['exists'] = is_valid + + # 如果链接无效,添加到无效链接列表 + if not is_valid and link not in self.invalid_links: + self.invalid_links.append(link) + + def generate_reports(self, output_path): + """ + 生成两个报告:完整报告和错误链接报告 + + Args: + output_path: 完整报告输出文件路径 + """ + # 生成完整报告 + self.generate_full_report(output_path) + + # 生成错误链接报告 + error_report_path = output_path.replace('.md', '-error.md') + if output_path == error_report_path: + error_report_path = os.path.splitext(output_path)[0] + '-error.md' + + self.generate_error_report(error_report_path) + + def generate_full_report(self, output_path): + """ + 生成包含所有链接的完整报告 + + Args: + output_path: 输出文件路径 + """ + content = "# GitBook链接检查报告(完整版)\n\n" + + # 添加章节标题说明 + content += "本报告显示了GitBook文档中的所有链接及其引用的文档。每行的格式为:\n" + content += "* [文档标题](文档链接) | [引用的文档1](链接1) | [引用的文档2](链接2) | ...\n\n" + + # 跟踪已处理的章节 + processed_sections = set() + + # 递归生成报告内容 + def generate_link_report(link, indent=""): + nonlocal content + + # 检查是否有新章节 + if 'section' in link and link['section'] and link['section'] not in processed_sections: + content += f"\n## {link['section']}\n\n" + processed_sections.add(link['section']) + + # 生成主链接 + file_path = link.get('file_path') + 
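As an aside on `generate_reports` earlier in this script: it derives the error-report filename by substituting `.md` in the full-report path, and `str.replace` substitutes every occurrence, not just the suffix. A suffix-only variant using `os.path.splitext` can be sketched in isolation (this is a standalone sketch, not the script's exact code):

```python
import os

def error_report_path(output_path):
    # Replace only the final extension, so a ".md" appearing elsewhere
    # in the path is left untouched; fall back to ".md" when the path
    # has no extension at all.
    stem, ext = os.path.splitext(output_path)
    return stem + "-error" + (ext or ".md")

assert error_report_path("docs/link-check-report.md") == "docs/link-check-report-error.md"
assert error_report_path("report") == "report-error.md"
```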
status = "✅" if link.get('exists', False) else "❌" + + # 基本链接信息 + content += f"{indent}* [{link['text']}]({link['link']}) {status}" + + # 添加该文档中引用的所有非图片链接 + if file_path and file_path in self.md_links and self.md_links[file_path]: + referenced_links = self.md_links[file_path] + + # 遍历文档中引用的所有链接 + for ref_link in referenced_links: + # 跳过图片链接 + if 'link' in ref_link and self.is_image_link(ref_link['link']): + continue + + ref_status = "✅" if ref_link.get('exists', False) else "❌" + content += f" | [{ref_link['text']}]({ref_link['link']}) {ref_status}" + + content += "\n" + + # 递归处理子链接 + for child in link.get('children', []): + generate_link_report(child, indent + " ") + + # 处理所有顶级链接 + for link in self.summary_links: + generate_link_report(link) + + # 保存报告 + try: + # 确保输出目录存在 + output_dir = os.path.dirname(output_path) + if output_dir and not os.path.exists(output_dir): + os.makedirs(output_dir) + + with open(output_path, 'w', encoding='utf-8') as file: + file.write(content) + + print(f"完整报告已生成: {output_path}") + except Exception as e: + print(f"写入报告时出错: {e}") + + def generate_error_report(self, output_path): + """ + 生成仅包含错误链接的报告 + + Args: + output_path: 输出文件路径 + """ + if not self.invalid_links: + print(f"没有发现无效链接,不生成错误报告") + return + + content = "# GitBook链接检查报告(仅错误链接)\n\n" + content += "本报告仅显示文档中的无效链接。每行的格式为:\n" + content += "* [文档标题](文档链接) | [无效链接](链接路径) ❌\n\n" + + # 按源文件组织无效链接 + links_by_source = defaultdict(list) + + for link in self.invalid_links: + source = link.get('source_file', 'Unknown') + links_by_source[source].append(link) + + # 按源文件添加无效链接 + for source, links in sorted(links_by_source.items()): + # 添加源文件标题 + content += f"## 来自 {source}\n\n" + + # 找到源文件在summary中的对应链接 + summary_link = None + + # 查找源文件对应的summary链接 + for link in self.extract_links_from_summary(): + if link.get('file_path') and os.path.relpath(link['file_path'], self.base_dir) == source: + summary_link = link + break + + # 如果是SUMMARY.md本身 + if source == 'SUMMARY.md': + # 添加每个无效链接 + for link 
in links: + status = "❌" + content += f"* [{link['text']}]({link['link']}) {status}\n" + else: + # 如果找到了源文件对应的summary链接 + if summary_link: + # 显示源文件链接和其中的无效链接 + source_status = "✅" if summary_link.get('exists', False) else "❌" + content += f"* [{summary_link['text']}]({summary_link['link']}) {source_status}" + + # 添加源文件中的无效链接 + for link in links: + content += f" | [{link['text']}]({link['link']}) ❌" + + content += "\n\n" + else: + # 没有找到源文件对应的summary链接,只显示无效链接 + for link in links: + content += f"* 来自: {source} - [{link['text']}]({link['link']}) ❌\n" + + content += "\n" + + # 保存报告 + try: + # 确保输出目录存在 + output_dir = os.path.dirname(output_path) + if output_dir and not os.path.exists(output_dir): + os.makedirs(output_dir) + + with open(output_path, 'w', encoding='utf-8') as file: + file.write(content) + + print(f"错误报告已生成: {output_path}") + except Exception as e: + print(f"写入错误报告时出错: {e}") + + +def main(): + """主函数""" + print("=" * 60) + print("多线程版GitBook链接检查器") + print("=" * 60) + + # 获取SUMMARY.md文件路径 + if len(sys.argv) > 1: + summary_path = sys.argv[1] + else: + summary_path = input("请输入SUMMARY.md文件路径: ").strip() + if not summary_path: + summary_path = os.path.join(os.getcwd(), "SUMMARY.md") + print(f"使用默认路径: {summary_path}") + + # 检查文件是否存在 + if not os.path.isfile(summary_path): + print(f"错误: 文件 '{summary_path}' 不存在") + sys.exit(1) + + # 获取基础目录 + base_dir = os.path.dirname(os.path.abspath(summary_path)) + if len(sys.argv) > 2: + base_dir = sys.argv[2] + else: + input_base_dir = input(f"请输入文档根目录 [默认: {base_dir}]: ").strip() + if input_base_dir: + base_dir = input_base_dir + + # 获取输出文件路径 + if len(sys.argv) > 3: + output_path = sys.argv[3] + else: + default_output = os.path.join(base_dir, "link-check-report.md") + output_path = input(f"请输入输出文件路径 [默认: {default_output}]: ").strip() + if not output_path: + output_path = default_output + + # 处理目录输出 + if os.path.isdir(output_path): + output_path = os.path.join(output_path, "link-check-report.md") + + # 询问是否验证在线链接 + 
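The error report buckets invalid links by the file they came from before rendering each section. That grouping step can be sketched on its own with hypothetical link records (field names mirror the script's `source_file` convention):

```python
from collections import defaultdict

def group_by_source(invalid_links):
    # Bucket invalid-link records by source file, mirroring the
    # links_by_source step in generate_error_report; records with no
    # recorded source land under "Unknown".
    links_by_source = defaultdict(list)
    for link in invalid_links:
        links_by_source[link.get("source_file", "Unknown")].append(link)
    return links_by_source

sample = [
    {"text": "Guide", "link": "guide.md", "source_file": "README.md"},
    {"text": "API", "link": "api.md", "source_file": "README.md"},
    {"text": "Intro", "link": "intro.md"},  # no source recorded
]
grouped = group_by_source(sample)
assert len(grouped["README.md"]) == 2
assert len(grouped["Unknown"]) == 1
```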
verify_online = input("是否验证在线链接? (y/n) [默认: n]: ").strip().lower() == 'y' + + max_threads = 10 + if verify_online: + # 获取最大线程数 + try: + max_threads = int(input(f"请输入最大线程数 [默认: 10]: ").strip() or "10") + if max_threads < 1: + max_threads = 10 + print(f"线程数必须大于0,已设置为默认值10") + except ValueError: + max_threads = 10 + print(f"输入无效,已设置为默认值10") + + print(f"将使用 {max_threads} 个线程并行检查在线链接") + else: + print("未验证的在线链接将被标记为错误,并添加到错误报告中") + + start_time = time.time() + + try: + # 创建链接检查器并执行检查 + checker = LinkChecker( + summary_path=summary_path, + base_dir=base_dir, + verify_online=verify_online, + max_threads=max_threads + ) + + checker.check_links() + checker.generate_reports(output_path) + + # 统计信息 + total_files = len(checker.processed_files) + invalid_links = len(checker.invalid_links) + + end_time = time.time() + elapsed_time = end_time - start_time + + print(f"\n统计信息:") + print(f"- 检查的文件数: {total_files}") + print(f"- 无效链接数: {invalid_links}") + print(f"- 耗时: {elapsed_time:.2f} 秒") + + print("\n检查完成!") + except Exception as e: + print(f"执行过程中出错: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/scripts/extract-gitbook-url.py b/scripts/extract-gitbook-url.py new file mode 100644 index 00000000..960d2b02 --- /dev/null +++ b/scripts/extract-gitbook-url.py @@ -0,0 +1,176 @@ +#!/usr/bin/env python3 +""" +改进的GitBook Summary链接提取器 (支持目录输出) + +此脚本从SUMMARY.md文件中提取所有内容, +保留原始的目录结构和标题, +将链接转换为在线URL(不包含.md后缀)。 +支持将输出文件放在指定目录中。 +""" + +import os +import re +import sys +import urllib.parse + +def process_summary_file(summary_path, base_url): + """ + 处理SUMMARY.md文件,保留结构并转换链接 + + Args: + summary_path: SUMMARY.md文件的路径 + base_url: 基础URL + + Returns: + processed_content: 处理后的内容 + """ + print(f"正在处理 {summary_path}...") + + try: + with open(summary_path, 'r', encoding='utf-8') as file: + content = file.read() + except Exception as e: + print(f"读取文件时出错: {e}") + sys.exit(1) + + # 确保base_url以/结尾 + if not 
base_url.endswith('/'): + base_url += '/' + + # 处理每一行 + lines = content.split('\n') + processed_lines = [] + + for line in lines: + # 提取行中的Markdown链接 + link_pattern = r'\[([^\]]+)\]\(([^)]+)\)' + matches = re.findall(link_pattern, line) + + processed_line = line + + # 替换每个链接 + for text, link in matches: + # 跳过锚点链接 + if link.startswith('#'): + continue + + # 构建完整URL + if not link.startswith(('http://', 'https://')): + if link.startswith('/'): + link = link[1:] + full_url = urllib.parse.urljoin(base_url, link) + else: + full_url = link + + # 移除.md后缀 + if full_url.endswith('.md'): + full_url = full_url[:-3] + + # 替换链接 + original_link = f"[{text}]({link})" + new_link = f"[{text}]({full_url})" + processed_line = processed_line.replace(original_link, new_link) + + processed_lines.append(processed_line) + + return '\n'.join(processed_lines) + + +def save_to_markdown(content, output_path): + """ + 保存处理后的内容到Markdown文件 + + Args: + content: 处理后的内容 + output_path: 输出文件路径 + """ + # 检查路径是否是目录 + if os.path.isdir(output_path): + # 如果是目录,在该目录中创建默认文件名 + output_file = os.path.join(output_path, "gitbook-urls.md") + else: + # 否则使用提供的路径 + output_file = output_path + + # 确保输出目录存在 + output_dir = os.path.dirname(output_file) + if output_dir and not os.path.exists(output_dir): + try: + os.makedirs(output_dir) + print(f"已创建目录: {output_dir}") + except Exception as e: + print(f"创建目录时出错: {e}") + sys.exit(1) + + try: + with open(output_file, 'w', encoding='utf-8') as file: + file.write(content) + print(f"Markdown文件已生成: {output_file}") + except Exception as e: + print(f"写入文件时出错: {e}") + sys.exit(1) + + +def add_header(content): + """ + 向内容添加标题和说明 + + Args: + content: 原始内容 + + Returns: + new_content: 添加标题和说明后的内容 + """ + header = "# GitBook文档链接\n\n" + header += "以下是从SUMMARY.md提取的文档结构和链接:\n\n" + + return header + content + + +if __name__ == "__main__": + print("=" * 60) + print("改进的GitBook Summary链接提取器 (支持目录输出)") + print("=" * 60) + + # 获取SUMMARY.md文件路径 + if len(sys.argv) > 1: + summary_path = 
sys.argv[1] + else: + summary_path = input("请输入SUMMARY.md文件路径: ").strip() + if not summary_path: + summary_path = os.path.join(os.getcwd(), "SUMMARY.md") + print(f"使用默认路径: {summary_path}") + + # 检查文件是否存在 + if not os.path.isfile(summary_path): + print(f"错误: 文件 '{summary_path}' 不存在") + sys.exit(1) + + # 获取基础URL + if len(sys.argv) > 2: + base_url = sys.argv[2] + else: + base_url = input("请输入文档基础URL: ").strip() + if not base_url: + base_url = "https://docs.example.com/" + print(f"使用默认URL: {base_url}") + + # 获取输出文件路径或目录 + if len(sys.argv) > 3: + output_path = sys.argv[3] + else: + default_output = os.path.join(os.path.dirname(summary_path), "gitbook-urls.md") + output_path = input(f"请输入输出文件路径或目录 [默认: {default_output}]: ").strip() + if not output_path: + output_path = default_output + + # 处理文件内容 + processed_content = process_summary_file(summary_path, base_url) + + # 添加标题和说明 + final_content = add_header(processed_content) + + # 保存到Markdown文件 + save_to_markdown(final_content, output_path) + + print("\n处理完成!") \ No newline at end of file diff --git a/scripts/extract-local-file-url.py b/scripts/extract-local-file-url.py new file mode 100644 index 00000000..9fb697a7 --- /dev/null +++ b/scripts/extract-local-file-url.py @@ -0,0 +1,367 @@ +#!/usr/bin/env python3 +""" +本地GitBook Markdown文件链接检查工具 + +此脚本会: +1. 从SUMMARY.md提取所有文档链接 +2. 解析每个本地Markdown文件 +3. 提取并验证文件中的内部链接 +4. 
生成链接检查报告 +""" + +import os +import re +import sys +import csv +from datetime import datetime +from urllib.parse import urlparse, urljoin + +# 尝试导入依赖,如果不存在则自动安装 +try: + from bs4 import BeautifulSoup + import markdown +except ImportError: + print("正在安装必要依赖...") + import subprocess + subprocess.check_call([sys.executable, "-m", "pip", "install", "beautifulsoup4", "markdown"]) + from bs4 import BeautifulSoup + import markdown + + +class GitbookLocalChecker: + """GitBook本地文件链接检查工具""" + + def __init__(self, summary_path, base_dir=None, remove_md=True): + """ + 初始化链接检查器 + + Args: + summary_path: SUMMARY.md文件路径 + base_dir: 文档根目录,默认为SUMMARY.md所在目录 + remove_md: 是否移除.md后缀 + """ + self.summary_path = os.path.abspath(summary_path) + self.base_dir = base_dir or os.path.dirname(self.summary_path) + self.remove_md = remove_md + self.all_links = [] + self.all_md_files = [] + self.invalid_links = [] + + # 记录解析过的文件,避免重复处理 + self.processed_files = set() + + def extract_summary_links(self): + """从SUMMARY.md提取所有Markdown文件链接""" + print(f"正在从 {self.summary_path} 提取文档链接...") + + with open(self.summary_path, 'r', encoding='utf-8') as file: + content = file.read() + + # 使用正则表达式提取链接 + link_pattern = r'\[([^\]]+)\]\(([^)]+)\)' + matches = re.findall(link_pattern, content) + + links = [] + for i, (text, link) in enumerate(matches, 1): + # 排除锚点链接 + if not link.startswith('#') and link.endswith('.md'): + # 计算本地文件路径 + local_path = os.path.normpath(os.path.join(self.base_dir, link)) + + links.append({ + 'id': i, + 'text': text, + 'link': link, + 'local_path': local_path, + 'exists': os.path.exists(local_path), + 'type': 'summary_link', + 'source_file': 'SUMMARY.md' + }) + + # 将文件添加到待处理列表 + if os.path.exists(local_path): + self.all_md_files.append(local_path) + + print(f"找到 {len(links)} 个文档链接,{len(self.all_md_files)} 个本地Markdown文件") + self.all_links.extend(links) + return links + + def process_md_file(self, file_path): + """处理单个Markdown文件,提取其中的链接""" + # 如果文件已处理,跳过 + if file_path in 
self.processed_files: + return [] + + self.processed_files.add(file_path) + relative_path = os.path.relpath(file_path, self.base_dir) + + try: + with open(file_path, 'r', encoding='utf-8') as file: + content = file.read() + + # 提取所有链接 + link_pattern = r'\[([^\]]+)\]\(([^)]+)\)' + matches = re.findall(link_pattern, content) + + links = [] + for text, link in matches: + # 排除外部链接和锚点链接 + if link.startswith(('http://', 'https://', '#')): + continue + + # 解析相对路径 + if link.startswith('/'): + # 从根目录计算 + target_path = os.path.normpath(os.path.join(self.base_dir, link.lstrip('/'))) + else: + # 从当前文件所在目录计算 + target_path = os.path.normpath(os.path.join(os.path.dirname(file_path), link)) + + # 如果链接没有扩展名但指向目录,添加README.md + if not os.path.splitext(target_path)[1]: + if os.path.isdir(target_path): + target_path = os.path.join(target_path, 'README.md') + else: + # 可能是不带扩展名的文件引用,添加.md + target_path += '.md' + + # 检查链接是否有效 + exists = os.path.exists(target_path) + + link_info = { + 'text': text, + 'link': link, + 'local_path': target_path, + 'target_file': os.path.basename(target_path), + 'exists': exists, + 'type': 'internal_link', + 'source_file': relative_path + } + + links.append(link_info) + + # 如果链接无效,添加到无效链接列表 + if not exists: + self.invalid_links.append(link_info) + # 如果是有效的Markdown文件且尚未处理,添加到待处理列表 + elif target_path.endswith('.md') and target_path not in self.processed_files: + self.all_md_files.append(target_path) + + return links + + except Exception as e: + print(f"处理文件 {file_path} 时出错: {e}") + return [] + + def process_all_files(self): + """处理所有Markdown文件""" + print("开始处理所有Markdown文件...") + + # 先提取SUMMARY.md中的链接 + self.extract_summary_links() + + # 处理所有Markdown文件 + files_to_process = list(self.all_md_files) # 创建副本,因为处理过程中会添加新文件 + processed_count = 0 + + for file_path in files_to_process: + if file_path not in self.processed_files: + relative_path = os.path.relpath(file_path, self.base_dir) + print(f"处理文件: {relative_path}") + + links = self.process_md_file(file_path) + 
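The target-path rules in `process_md_file` (root-relative links resolve against the docs root, other links against the containing file's directory, and extensionless references fall back to a directory's `README.md` or a `.md` file) can be sketched without touching the filesystem. As an assumption: `is_dir` is passed in explicitly here instead of calling `os.path.isdir`, so this is an illustrative sketch rather than the script's exact code:

```python
import posixpath

def resolve_target(link, base_dir, current_dir, is_dir=False):
    # Root-relative links resolve against the docs root; all others
    # resolve against the directory of the file that contains them.
    if link.startswith("/"):
        target = posixpath.normpath(posixpath.join(base_dir, link.lstrip("/")))
    else:
        target = posixpath.normpath(posixpath.join(current_dir, link))
    # Extensionless references: a directory means its README.md,
    # otherwise assume a Markdown file with the suffix omitted.
    if not posixpath.splitext(target)[1]:
        target = posixpath.join(target, "README.md") if is_dir else target + ".md"
    return target

assert resolve_target("/guides/start.md", "docs", "docs/en") == "docs/guides/start.md"
assert resolve_target("../intro", "docs", "docs/en") == "docs/intro.md"
assert resolve_target("api", "docs", "docs/en", is_dir=True) == "docs/en/api/README.md"
```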
self.all_links.extend(links) + + processed_count += 1 + + # 如果发现新文件,可能需要处理它们 + new_files = [f for f in self.all_md_files if f not in files_to_process and f not in self.processed_files] + files_to_process.extend(new_files) + + print(f"已处理 {processed_count} 个Markdown文件") + print(f"共找到 {len(self.all_links)} 个链接,其中 {len(self.invalid_links)} 个无效") + + def generate_markdown_report(self, output_path): + """生成Markdown格式的报告""" + print(f"正在生成报告: {output_path}") + + content = f"""# GitBook本地链接检查报告 + +## 摘要 +- 检查时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} +- 处理文件数: {len(self.processed_files)} +- 总链接数: {len(self.all_links)} +- 无效链接数: {len(self.invalid_links)} + +## 无效链接列表 +""" + + # 按源文件分组显示无效链接 + grouped_links = {} + for link in self.invalid_links: + source = link['source_file'] + if source not in grouped_links: + grouped_links[source] = [] + grouped_links[source].append(link) + + for source, links in sorted(grouped_links.items()): + content += f"\n### 文件: {source}\n" + for link in links: + content += f"- [{link['text']}]({link['link']}) -> {link['local_path']} (无效)\n" + + # 添加所有文件的链接统计 + content += "\n## 文件链接统计\n" + file_stats = {} + for link in self.all_links: + source = link['source_file'] + if source not in file_stats: + file_stats[source] = {'total': 0, 'invalid': 0} + file_stats[source]['total'] += 1 + if not link['exists']: + file_stats[source]['invalid'] += 1 + + for source, stats in sorted(file_stats.items()): + content += f"- {source}: 共 {stats['total']} 个链接,{stats['invalid']} 个无效\n" + + with open(output_path, 'w', encoding='utf-8') as file: + file.write(content) + + print(f"报告已生成: {output_path}") + + def generate_csv_report(self, output_path): + """生成CSV格式的报告""" + print(f"正在生成CSV报告: {output_path}") + + with open(output_path, 'w', newline='', encoding='utf-8') as csvfile: + fieldnames = ['source_file', 'text', 'link', 'local_path', 'exists', 'type'] + writer = csv.DictWriter(csvfile, fieldnames=fieldnames) + writer.writeheader() + + for link in self.all_links: 
+ writer.writerow({ + 'source_file': link['source_file'], + 'text': link['text'], + 'link': link['link'], + 'local_path': link['local_path'], + 'exists': link['exists'], + 'type': link['type'] + }) + + print(f"CSV报告已生成: {output_path}") + + +def get_input_with_default(prompt, default=None): + """获取用户输入,如果为空则使用默认值""" + if default: + user_input = input(f"{prompt} [{default}]: ") + return user_input if user_input.strip() else default + else: + return input(f"{prompt}: ") + + +def get_yes_no_input(prompt, default="y"): + """获取用户是/否输入""" + valid_responses = { + 'y': True, 'yes': True, '是': True, + 'n': False, 'no': False, '否': False + } + + if default.lower() in ['y', 'yes', '是']: + prompt = f"{prompt} [Y/n]: " + default_value = True + else: + prompt = f"{prompt} [y/N]: " + default_value = False + + user_input = input(prompt).lower() + + if not user_input: + return default_value + + return valid_responses.get(user_input, default_value) + + +def main(): + """主函数,交互式获取输入""" + print("=" * 60) + print("本地GitBook Markdown文件链接检查工具") + print("=" * 60) + + # 获取SUMMARY.md文件路径 + while True: + summary_path = get_input_with_default( + "请输入SUMMARY.md文件路径", + os.path.join(os.getcwd(), "SUMMARY.md") + ) + + # 检查文件是否存在 + if os.path.isfile(summary_path): + break + else: + print(f"错误: 文件 '{summary_path}' 不存在") + + # 获取文档根目录 + default_base_dir = os.path.dirname(os.path.abspath(summary_path)) + base_dir = get_input_with_default( + "请输入文档根目录(包含所有Markdown文件的目录)", + default_base_dir + ) + + # 获取输出目录 + output_dir = get_input_with_default( + "请输入输出目录", + os.path.dirname(summary_path) or os.getcwd() + ) + + # 确保输出目录存在 + os.makedirs(output_dir, exist_ok=True) + + # 生成文件路径 + report_path = os.path.join(output_dir, "gitbook-links-report.md") + csv_path = os.path.join(output_dir, "gitbook-links-report.csv") + + # 询问是否移除.md后缀 + remove_md = get_yes_no_input("是否移除链接中的.md后缀", "y") + + try: + # 创建检查器实例 + checker = GitbookLocalChecker( + summary_path=summary_path, + base_dir=base_dir, + remove_md=remove_md 
+        )
+
+        # 处理所有文件
+        checker.process_all_files()
+
+        # 生成报告
+        checker.generate_markdown_report(report_path)
+        checker.generate_csv_report(csv_path)
+
+        print("\n检查完成!")
+        print(f"Markdown报告: {report_path}")
+        print(f"CSV报告: {csv_path}")
+
+        # 显示摘要
+        print(f"\n摘要:")
+        print(f"- 处理文件数: {len(checker.processed_files)}")
+        print(f"- 总链接数: {len(checker.all_links)}")
+        print(f"- 无效链接数: {len(checker.invalid_links)}")
+
+        if checker.invalid_links:
+            print("\n无效链接示例:")
+            for i, link in enumerate(checker.invalid_links[:5], 1):
+                print(f"{i}. 文件 '{link['source_file']}' 中 [{link['text']}]({link['link']}) -> {link['local_path']} (无效)")
+
+            if len(checker.invalid_links) > 5:
+                print(f"... 以及其他 {len(checker.invalid_links) - 5} 个无效链接")
+
+    except Exception as e:
+        print(f"执行过程中出错: {e}")
+        import traceback
+        traceback.print_exc()
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()
\ No newline at end of file
diff --git a/scripts/md-to-mdx.py b/scripts/md-to-mdx.py
index 5632e54f..5e4c9684 100644
--- a/scripts/md-to-mdx.py
+++ b/scripts/md-to-mdx.py
@@ -3,6 +3,7 @@
 import os
 import re
+import shutil
 from pathlib import Path
 import logging
@@ -18,7 +19,9 @@ logging.basicConfig(
 logger = logging.getLogger("md-to-mdx")
 
 class MarkdownToMDXConverter:
-    def __init__(self):
+    def __init__(self, backup=True, in_place=False):
+        self.backup = backup
+        self.in_place = in_place
         self.conversion_count = 0
         self.error_count = 0
         self.base_output_dir = None
@@ -31,90 +34,351 @@ class MarkdownToMDXConverter:
             logger.error(f"输入目录不存在: {input_dir}")
             return
 
-        if self.base_output_dir is None and output_dir:
+        # 保存基础输出目录,用于构建子目录输出路径
+        if not self.in_place and self.base_output_dir is None and output_dir:
             self.base_output_dir = Path(output_dir)
             self.base_input_dir = input_path
             self.base_output_dir.mkdir(parents=True, exist_ok=True)
             logger.info(f"创建基础输出目录: {self.base_output_dir}")
 
-        for file in input_path.glob("*.md"):
-            if self.base_output_dir:
-                rel_path = file.parent.relative_to(self.base_input_dir) if file.parent != self.base_input_dir else Path('')
-                target_dir = self.base_output_dir / rel_path
-                target_dir.mkdir(parents=True, exist_ok=True)
-                self._process_file(file, target_dir)
+        # 处理当前目录中的所有.md和.mdx文件
+        for file in list(input_path.glob("*.md")) + list(input_path.glob("*.mdx")):
+            if self.in_place:
+                # 在原位置处理
+                self._process_file(file, file.parent, delete_original=True)
             else:
-                self._process_file(file, file.parent)
+                # 计算相对于基础输入目录的路径
+                if self.base_output_dir:
+                    rel_path = file.parent.relative_to(self.base_input_dir) if file.parent != self.base_input_dir else Path('')
+                    target_dir = self.base_output_dir / rel_path
+                    target_dir.mkdir(parents=True, exist_ok=True)
+                    self._process_file(file, target_dir)
+                else:
+                    # 如果没有基础输出目录,则就地处理
+                    self._process_file(file, file.parent)
 
+        # 如果需要递归处理子目录
         if recursive:
             for subdir in [d for d in input_path.iterdir() if d.is_dir()]:
+                # 跳过output目录,避免重复处理
                 if subdir.name == "output" or subdir.name.startswith('.'):
                     continue
+
                 self.process_directory(subdir, output_dir, recursive)
 
-    def _process_file(self, file_path, output_dir):
+    def _process_file(self, file_path, output_dir, delete_original=False):
         """处理单个Markdown文件"""
         try:
             logger.info(f"处理文件: {file_path}")
 
+            # 备份原始文件(如果需要)
+            if self.backup:
+                backup_file = str(file_path) + ".bak"
+                if not os.path.exists(backup_file):
+                    shutil.copy2(file_path, backup_file)
+                    logger.info(f"已创建备份: {backup_file}")
+
+            # 读取文件内容
             with open(file_path, 'r', encoding='utf-8') as f:
                 content = f.read()
 
-            content = self._fix_broken_text(content)
-            content = self._convert_images(content)
-            content = self._convert_hints(content)
+            # 执行转换
             converted_content = self.convert_content(content)
 
+            # 确定输出文件路径
             output_file = output_dir / (file_path.stem + ".mdx")
 
+            # 写入转换后的内容
             with open(output_file, 'w', encoding='utf-8') as f:
                 f.write(converted_content)
 
             logger.info(f"转换完成: {output_file}")
             self.conversion_count += 1
+
+            # 如果需要,删除原始文件
+            if delete_original:
+                try:
+                    os.remove(file_path)
+                    logger.info(f"已删除源文件: {file_path}")
+
                except Exception as e:
+                    logger.error(f"删除源文件 {file_path} 失败: {str(e)}")
 
         except Exception as e:
             logger.error(f"处理文件 {file_path} 时出错: {str(e)}")
             self.error_count += 1
 
-    def _fix_broken_text(self, content):
-        """修复文本中的割裂问题,特别是在代码块周围"""
-        broken_code_pattern = re.compile(r'```([a-zA-Z]*)\r?\n(.*?)\r?\n```([a-zA-Z]*)', re.DOTALL)
-        content = broken_code_pattern.sub(r'```\1\n\2\n```', content)
-        return content
-
-    def _convert_images(self, content):
-        """转换HTML图片格式为Markdown或MDX格式"""
-
-        # 转换没有标题的 <figure><img> 结构
-        img_pattern_no_caption = re.compile(r'<figure>\s*<img src="([^"]+)"[^>]*>\s*</figure>', re.DOTALL)
-        content = img_pattern_no_caption.sub(r'![](\1)', content)
-
-        # 转换带标题的 <figure><img><figcaption> 结构
-        img_pattern_with_caption = re.compile(r'<figure>\s*<img src="([^"]+)"([^>]*)>\s*<figcaption>(.*?)</figcaption>\s*</figure>', re.DOTALL)
-        def img_replacer(match):
-            img_src = match.group(1)
-            alt_text = match.group(3).strip()
-            return f'![{alt_text}]({img_src})'
-        content = img_pattern_with_caption.sub(img_replacer, content)
-
-        return content
-
-    def _convert_hints(self, content):
-        """转换 hint 提示框"""
-        hint_pattern = re.compile(r'{%\s*hint\s*style="info"\s*%}\s*{%\s*endhint\s*%}', re.DOTALL)
-        content = hint_pattern.sub(r'\n', content)
-        return content
-
     def convert_content(self, content):
         """将Gitbook Markdown内容转换为Mintlify MDX格式"""
+
+        # 1. 转换文档开头的h1元素为frontmatter
         h1_pattern = re.compile(r'^#\s+(.+?)$', re.MULTILINE)
         match = h1_pattern.search(content)
         if match:
             title = match.group(1).strip()
             content = h1_pattern.sub(f'---\ntitle: {title}\n---\n', content, count=1)
+
+        # 2. 转换hint提示框
+        hint_pattern = re.compile(
+            r'{%\s*hint\s+style="(\w+)"\s*%}(.*?){%\s*endhint\s*%}',
+            re.DOTALL
+        )
+
+        def hint_replacer(match):
+            style = match.group(1)
+            text = match.group(2).strip()
+            component_name = style.capitalize() if style != "info" else "Info"
+            return f'<{component_name}>\n{text}\n</{component_name}>'
+
+        content = hint_pattern.sub(hint_replacer, content)
+
+        # 3. 转换卡片链接
+        card_pattern = re.compile(
+            r'{%\s*content-ref\s+url="([^"]+)"\s*%}\s*\[([^\]]+)\]\(([^)]+)\)\s*{%\s*endcontent-ref\s*%}',
+            re.DOTALL
+        )
+
+        def card_replacer(match):
+            url = match.group(1)
+            title = match.group(2)
+            return f'<Card title="{title}" href="{url}">\n  {title}\n</Card>'
+
+        content = card_pattern.sub(card_replacer, content)
+
+        # 4. 转换并排图片样式
+        # 寻找连续的图片并转换为并排布局
+        img_pattern = re.compile(r'!\[(.*?)\]\((.*?)\)\s*!\[(.*?)\]\((.*?)\)', re.DOTALL)
+
+        def img_side_replacer(match):
+            alt1 = match.group(1) or "Image 1"
+            src1 = match.group(2)
+            alt2 = match.group(3) or "Image 2"
+            src2 = match.group(4)
+
+            return f'''<div style="display: flex; gap: 8px;">
+  <img src="{src1}" alt="{alt1}" style="width: 50%;" />
+  <img src="{src2}" alt="{alt2}" style="width: 50%;" />
+</div>'''
+
+        content = img_pattern.sub(img_side_replacer, content)
+
+        # 5. 转换Frame包装的图片
+        frame_pattern = re.compile(r'<Frame>\s*<img src="([^"]+)" alt="([^"]*)"[^>]*>\s*</Frame>', re.DOTALL)
+
+        def frame_replacer(match):
+            src = match.group(1)
+            alt = match.group(2)
+            return f'![{alt}]({src})'
+
+        content = frame_pattern.sub(frame_replacer, content)
+
+        # 5.1 转换 <figure>
格式的带有宽度和figcaption的图片为特定格式
+        figure_img_width_caption_pattern = re.compile(r'<figure>\s*<img\s+src="([^"]+)"(?:\s+alt="([^"]*)")?\s+width="(\d+)"[^>]*>\s*<figcaption>(?:<p>)?(.*?)(?:</p>)?</figcaption>\s*</figure>', re.DOTALL)
+
+        def figure_img_width_caption_replacer(match):
+            src = match.group(1)
+            alt = match.group(2) or ""
+            width = match.group(3)
+            caption = match.group(4).strip()
+
+            # 如果有caption,将其添加到alt中
+            if caption:
+                alt = caption
+
+            return f'''<img src="{src}" alt="{alt}" width="{width}" />'''
+
+        content = figure_img_width_caption_pattern.sub(figure_img_width_caption_replacer, content)
+
+        # 5.2 转换 <figure> 格式的带有宽度但没有figcaption的图片
+        figure_img_width_pattern = re.compile(r'<figure>\s*<img\s+src="([^"]+)"(?:\s+alt="([^"]*)")?\s+width="(\d+)"[^>]*>\s*</figure>', re.DOTALL)
+
+        def figure_img_width_replacer(match):
+            src = match.group(1)
+            alt = match.group(2) or ""
+            width = match.group(3)
+
+            return f'''<img src="{src}" alt="{alt}" width="{width}" />'''
+
+        content = figure_img_width_pattern.sub(figure_img_width_replacer, content)
+
+        # 5.3 转换 <figure> 格式的没有宽度但有figcaption的图片
+        figure_img_caption_pattern = re.compile(r'<figure>\s*<img\s+src="([^"]+)"(?:\s+alt="([^"]*)")?[^>]*>\s*<figcaption>(?:<p>)?(.*?)(?:</p>)?</figcaption>\s*</figure>', re.DOTALL)
+
+        def figure_img_caption_replacer(match):
+            src = match.group(1)
+            alt = match.group(2) or ""
+            caption = match.group(3).strip()
+
+            # 如果有caption,将其添加到alt中
+            if caption:
+                alt = caption
+
+            return f'''<img src="{src}" alt="{alt}" />'''
+
+        content = figure_img_caption_pattern.sub(figure_img_caption_replacer, content)
+
+        # 5.4 处理没有figcaption和宽度的 <figure> 标签
+        figure_img_no_caption_pattern = re.compile(r'<figure>\s*<img\s+src="([^"]+)"(?:\s+alt="([^"]*)")?[^>]*>\s*</figure>', re.DOTALL)
+
+        def figure_img_no_caption_replacer(match):
+            src = match.group(1)
+            alt = match.group(2) or ""
+
+            return f'''<img src="{src}" alt="{alt}" />'''
+
+        content = figure_img_no_caption_pattern.sub(figure_img_no_caption_replacer, content)
+
+        # 6. 转换Tabs组件
+        # 先匹配整个tabs块
+        tabs_pattern = re.compile(
+            r'{%\s*tabs\s*%}(.*?){%\s*endtabs\s*%}',
+            re.DOTALL
+        )
+
+        def tabs_replacer(match):
+            tabs_content = match.group(1)
+            # 匹配每个tab
+            tab_pattern = re.compile(
+                r'{%\s*tab\s+title="([^"]+)"\s*%}(.*?){%\s*endtab\s*%}',
+                re.DOTALL
+            )
+
+            # 构建新的Tabs组件
+            tabs_start = "<Tabs>"
+            tabs_items = []
+
+            for tab_match in tab_pattern.finditer(tabs_content):
+                title = tab_match.group(1)
+                content = tab_match.group(2).strip()
+                tabs_items.append(f'  <Tab title="{title}">\n    {content}\n  </Tab>')
+
+            tabs_end = "</Tabs>"
+
+            return tabs_start + "\n" + "\n".join(tabs_items) + "\n" + tabs_end
+
+        content = tabs_pattern.sub(tabs_replacer, content)
+
+        # 7. 处理有限制大小的独立img标签
+        img_size_pattern = re.compile(r'<img\s+src="([^"]+)"\s+width="(\d+)"(?:\s+alt="([^"]*)")?[^>]*>', re.DOTALL)
+
+        def img_size_replacer(match):
+            src = match.group(1)
+            width = match.group(2)
+            alt = match.group(3) if match.group(3) else ""
+
+            return f'''<img src="{src}" alt="{alt}" width="{width}" />'''
+
+        content = img_size_pattern.sub(img_size_replacer, content)
+
+        # 7.1 处理各种形式的独立 <img> 标签
+        standalone_img_pattern = re.compile(r'<img\s+src="([^"]+)"(?:\s+alt="([^"]*)")?[^>]*>', re.DOTALL)
+
+        def standalone_img_replacer(match):
+            src = match.group(1)
+            alt = match.group(2) if match.group(2) else ""
+
+            return f'''<img src="{src}" alt="{alt}" />'''
+
+        content = standalone_img_pattern.sub(standalone_img_replacer, content)
+
+        # 8.
将markdown表格转换为MDX表格格式
+        # 使用正则表达式匹配markdown表格
+        table_pattern = re.compile(r'(\|.*\|\n\|[-:\s|]*\|\n(?:\|.*\|\n)+)', re.MULTILINE)
+
+        def table_replacer(match):
+            md_table = match.group(1)
+            lines = md_table.strip().split('\n')
+
+            # 提取表头和表体
+            header_row = lines[0]
+            header_cells = [cell.strip() for cell in header_row.split('|')[1:-1]]
+
+            # 忽略分隔行
+            body_rows = lines[2:]
+            body_cells_rows = []
+            for row in body_rows:
+                cells = [cell.strip() for cell in row.split('|')[1:-1]]
+                body_cells_rows.append(cells)
+
+            # 按照要求的格式构建MDX表格
+            mdx_table = "<table>\n  <thead>\n    <tr>\n"
+
+            # 添加表头
+            for cell in header_cells:
+                mdx_table += f"      <th>{cell}</th>\n"
+
+            mdx_table += "    </tr>\n  </thead>\n  <tbody>\n"
+
+            # 添加表体
+            for row_cells in body_cells_rows:
+                mdx_table += "    <tr>\n"
+                for cell in row_cells:
+                    # 先转换Markdown链接为HTML链接
+                    # 匹配 [text](url) 格式
+                    link_pattern = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')
+                    cell = link_pattern.sub(r'<a href="\2">\1</a>', cell)
+
+                    # 替换 <br> 标签为 </p><p>,实现正确的段落分隔
+                    # 先处理 <br> 标签(可能有不同形式:<br>, <br/>, <br />)
+                    br_pattern = re.compile(r'<br\s*/?>')
+
+                    # 处理单元格中的 <p> 与 <br> 标签
+                    if '<p>' in cell or br_pattern.search(cell):
+                        # 如果已有 <p> 标签但包含 <br>,替换 <br> 为 </p><p>
+                        if '<p>' in cell and br_pattern.search(cell):
+                            cell = br_pattern.sub(r'</p>\n<p>', cell)
+                            # 清理末尾的空 <br> 标签
+                            cell = re.sub(r'<br\s*/?>(\s*</p>)', r'\1', cell)
+                        # 如果没有 <p> 标签但有 <br>,用 <p> 标签包装每个段落
+                        elif br_pattern.search(cell) and not '<p>' in cell:
+                            paragraphs = br_pattern.split(cell)
+                            cell = '<p>' + '</p>\n<p>'.join([p.strip() for p in paragraphs if p.strip()]) + '</p>'
+
+                        # 确保缩进正确
+                        mdx_table += f"      <td>\n        {cell}\n      </td>\n"
+                    else:
+                        # 普通文本单元格
+                        mdx_table += f"      <td>{cell}</td>\n"
+                mdx_table += "    </tr>\n"
+
+            mdx_table += "  </tbody>\n</table>"
+
+            return mdx_table
+
+        content = table_pattern.sub(table_replacer, content)
+
         return content
-
+
     def get_statistics(self):
         """返回处理统计信息"""
         return {
@@ -127,6 +391,7 @@ def main():
     print("Gitbook Markdown 转 Mintlify MDX 转换工具")
     print("=" * 60)
 
+    # 通过交互方式获取输入路径
     input_path_str = input("请输入源文件或目录路径: ")
     input_path = Path(input_path_str)
 
@@ -134,34 +399,54 @@ def main():
         print(f"错误: 路径 '{input_path_str}' 不存在!")
         return
 
+    # 询问是否递归处理子目录
     recursive = False
     if input_path.is_dir():
         recursive_input = input("是否递归处理所有子目录? (y/n): ").lower()
         recursive = recursive_input in ('y', 'yes')
 
-    if input_path.is_file():
-        output_dir = input_path.parent / "output"
-    else:
-        output_dir = input_path / "output"
+    # 询问是否创建备份
+    backup_input = input("是否创建备份文件? (y/n, 默认:y): ").lower()
+    create_backup = backup_input in ('', 'y', 'yes')
 
-    converter = MarkdownToMDXConverter()
+    # 询问是否原地转换并删除源文件
+    in_place_input = input("是否在原地转换并删除源文件? (y/n, 默认:n): ").lower()
+    in_place = in_place_input in ('y', 'yes')
 
-    if input_path.is_file() and input_path.suffix.lower() == '.md':
+    # 确定输出目录
+    output_dir = None
+    if not in_place:
+        if input_path.is_file():
+            output_dir = input_path.parent / "output"
+        else:
+            output_dir = input_path / "output"
         output_dir.mkdir(parents=True, exist_ok=True)
         print(f"输出目录已创建: {output_dir}")
-        converter._process_file(input_path, output_dir)
+
+    # 创建转换器并处理文件
+    converter = MarkdownToMDXConverter(backup=create_backup, in_place=in_place)
+
+    if input_path.is_file() and input_path.suffix.lower() == '.md':
+        # 处理单个文件
+        if in_place:
+            converter._process_file(input_path, input_path.parent, delete_original=True)
+        else:
+            converter._process_file(input_path, output_dir)
     elif input_path.is_dir():
+        # 处理目录
        converter.process_directory(input_path, output_dir, recursive)
     else:
         logger.error(f"无效的输入路径: {input_path_str}")
         print(f"错误: '{input_path_str}' 不是有效的Markdown文件或目录!")
         return
 
+    # 打印统计信息
     stats = converter.get_statistics()
     print("=" * 60)
     print(f"转换完成!
成功转换: {stats['conversion_count']}个文件, 错误: {stats['error_count']}个文件")
-    print(f"转换结果已保存至: {output_dir}")
+    if not in_place and output_dir:
+        print(f"转换结果已保存至: {output_dir}")
     print("=" * 60)
 
 if __name__ == "__main__":
-    main()
+    main()
\ No newline at end of file
diff --git a/zh-hans/community/contribution.mdx b/zh-hans/community/contribution.mdx
index 721cd4b3..9b1b0d9a 100644
--- a/zh-hans/community/contribution.mdx
+++ b/zh-hans/community/contribution.mdx
@@ -102,7 +102,7 @@ Dify 依赖以下工具和库:
 Dify 由后端和前端组成。通过 `cd api/` 导航到后端目录,然后按照 [后端 README](https://github.com/langgenius/dify/blob/main/api/README.md) 进行安装。在另一个终端中,通过 `cd web/` 导航到前端目录,然后按照 [前端 README](https://github.com/langgenius/dify/blob/main/web/README.md) 进行安装。
 
-查看 [安装常见问题解答](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq) 以获取常见问题列表和故障排除步骤。
+查看 [安装常见问题解答](/zh-hans/learn-more/faq/install-faq) 以获取常见问题列表和故障排除步骤。
 
 ### 5. 在浏览器中访问 Dify
 
@@ -110,11 +110,11 @@
 ## 开发
 
-如果你要添加模型提供程序,请参考 [此指南](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md)。
+如果你要添加模型提供程序,请参考 [模型开发](https://docs.dify.ai/plugins/quick-start/develop-plugins/model-plugin)。
 
-如果你要向 Agent 或 Workflow 添加工具提供程序,请参考 [此指南](https://github.com/langgenius/dify/blob/main/api/core/tools/README_CN.md)。
+如果你要向 Agent 或 Workflow 添加工具提供程序,请参考 [工具开发](https://docs.dify.ai/plugins/quick-start/develop-plugins/tool-plugin)。
 
-> **注意**:如果你想要贡献新的工具,请确保已在工具的 `YAML` 文件内留下了你的联系方式,并且在 [Dify-docs](https://github.com/langgenius/dify-docs/tree/main/en/guides/tools/tool-configuration) 帮助文档代码仓库中提交了对应的文档 PR。
+> **注意**:如果你想要贡献新的工具,请确保已在工具的 `YAML` 文件内留下了你的联系方式,并且在 [Dify-docs](https://github.com/langgenius/dify-docs) 帮助文档代码仓库中提交了对应的文档 PR。
 
 为了帮助你快速了解你的贡献在哪个部分,以下是 Dify 后端和前端的简要注释大纲:
diff --git a/zh-hans/getting-started/cloud.mdx b/zh-hans/getting-started/cloud.mdx
index 6f4c8154..a26e5a88 100644
--- a/zh-hans/getting-started/cloud.mdx
+++ b/zh-hans/getting-started/cloud.mdx
@@ -8,9 +8,9 @@ title: 云服务
 
 Dify 为所有人提供了[云服务](http://cloud.dify.ai),你无需自己部署即可使用 Dify 的完整功能。要使用 Dify 云服务,你需要有一个 GitHub 或 Google 账号。
 
-1. 登录 [Dify 云服务](https://cloud.dify.ai),创建一个或加入已有的 Workspace
-2. 配置你的模型供应商,或使用我们提供的托管模型供应商
-3. 可以[创建应用](../guides/application-orchestrate/creating-an-application.md)了!
+1. 登录 [Dify 云服务](https://cloud.dify.ai),创建一个或加入已有的 Workspace。
+2. 配置你的模型供应商,或使用我们提供的托管模型供应商。
+3. [创建应用](../guides/application-orchestrate/creating-an-application)。
 
 ### 订阅计划
 
@@ -21,4 +21,4 @@
 * 团队版
 * 企业版
 
-点击[此处](https://dify.ai/pricing)查看各版本定价请参考。
\ No newline at end of file
+点击[此处](https://dify.ai/pricing)查看各版本定价请参考。
diff --git a/zh-hans/getting-started/dify-premium.mdx b/zh-hans/getting-started/dify-premium.mdx
index cf52323e..ce5a06f3 100644
--- a/zh-hans/getting-started/dify-premium.mdx
+++ b/zh-hans/getting-started/dify-premium.mdx
@@ -5,7 +5,7 @@ title: Dify Premium
 
 Dify Premium 是一款 [AWS AMI](https://docs.aws.amazon.com/zh\_cn/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html) 产品,允许自定义品牌,并可作为 EC2 一键部署到你的 AWS VPC 上。前往 [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) 进行订阅并使用,它适合以下场景:
 
 * 在中小型企业内,需在服务器上创建一个或多应用程序,并且关心数据私有化。
-* 你对 [Dify Cloud](https://docs.dify.ai/v/zh-hans/getting-started/cloud)订阅计划感兴趣,但所需的用例资源超出了[计划](https://dify.ai/pricing)内所提供的资源。
+* 你对 [Dify Cloud](/zh-hans/getting-started/cloud)订阅计划感兴趣,但所需的用例资源超出了[计划](https://dify.ai/pricing)内所提供的资源。
 * 你希望在组织内采用 Dify Enterprise 之前进行 POC 验证。
 
 ## 设置
diff --git a/zh-hans/getting-started/install-self-hosted/environments.mdx b/zh-hans/getting-started/install-self-hosted/environments.mdx
index ec76de13..999eb561 100644
--- a/zh-hans/getting-started/install-self-hosted/environments.mdx
+++ b/zh-hans/getting-started/install-self-hosted/environments.mdx
@@ -199,7 +199,7 @@
 Flask 调试模式,开启可在接口输出 trace 信息,方便调试。
 
 WebAPP CORS 跨域策略,默认为 `*`,即所有域名均可访问。
 
-详细配置可参考:[跨域 / 身份相关指南](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq#id-3.-an-zhuang-shi-hou-wu-fa-deng-lu-deng-lu-cheng-gong-dan-hou-xu-jie-kou-jun-ti-shi-401)
+详细配置可参考:[跨域 / 身份相关指南](/zh-hans/learn-more/faq/install-faq#id-3.-an-zhuang-shi-hou-wu-fa-deng-lu-deng-lu-cheng-gong-dan-hou-xu-jie-kou-jun-ti-shi-401)
 
 #### 文件存储配置
diff --git a/zh-hans/getting-started/install-self-hosted/faq.mdx b/zh-hans/getting-started/install-self-hosted/faq.mdx
index 4af41f65..597838b1 100644
--- a/zh-hans/getting-started/install-self-hosted/faq.mdx
+++ b/zh-hans/getting-started/install-self-hosted/faq.mdx
@@ -5,7 +5,7 @@ title: 常见问题
 
 ### 1. 长时间未收到密码重置邮件应如何处理?
 
-你需要在 `.env` 文件内配置 `Mail` 参数项,详细说明请参考 [《环境变量说明:邮件相关配置》](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#you-jian-xiang-guan-pei-zhi)。
+你需要在 `.env` 文件内配置 `Mail` 参数项,详细说明请参考 [《环境变量说明:邮件相关配置》](/zh-hans/getting-started/install-self-hosted/environments#you-jian-xiang-guan-pei-zhi)。
 
 修改配置后,运行以下命令重启服务。
diff --git a/zh-hans/getting-started/install-self-hosted/local-source-code.mdx b/zh-hans/getting-started/install-self-hosted/local-source-code.mdx
index b3234b33..6c2cf2e6 100644
--- a/zh-hans/getting-started/install-self-hosted/local-source-code.mdx
+++ b/zh-hans/getting-started/install-self-hosted/local-source-code.mdx
@@ -38,7 +38,7 @@ title: 本地源码启动
 
-> 若需要使用 OpenAI TTS,需要在系统中安装 FFmpeg 才可正常使用,详情可参考:[Link](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq#id-15.-wen-ben-zhuan-yu-yin-yu-dao-zhe-ge-cuo-wu-zen-me-ban)。
+> 若需要使用 OpenAI TTS,需要在系统中安装 FFmpeg 才可正常使用,详情可参考:[Link](/zh-hans/learn-more/faq/install-faq#id-15.-wen-ben-zhuan-yu-yin-yu-dao-zhe-ge-cuo-wu-zen-me-ban)。
 
 Clone Dify 代码:
diff --git a/zh-hans/getting-started/install-self-hosted/readme.mdx b/zh-hans/getting-started/install-self-hosted/readme.mdx
index e15c20fa..32e84832 100644
--- a/zh-hans/getting-started/install-self-hosted/readme.mdx
+++ b/zh-hans/getting-started/install-self-hosted/readme.mdx
@@ -2,11 +2,10 @@
 title: 部署社区版
 ---
 
-
 Dify
社区版即开源版本,你可以通过以下两种方式之一部署 Dify 社区版: -* [Docker Compose 部署](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) -* [本地源码启动](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/local-source-code) +* [Docker Compose 部署](/zh-hans/getting-started/install-self-hosted/docker-compose) +* [本地源码启动](/zh-hans/getting-started/install-self-hosted/local-source-code) 在 GitHub 上查看 [Dify 社区版](https://github.com/langgenius/dify)。 diff --git a/zh-hans/getting-started/readme/model-providers.mdx b/zh-hans/getting-started/readme/model-providers.mdx index d73d06e7..06bb6091 100644 --- a/zh-hans/getting-started/readme/model-providers.mdx +++ b/zh-hans/getting-started/readme/model-providers.mdx @@ -382,4 +382,4 @@ Dify 为以下模型提供商提供原生支持: 其中 (🛠️) 代表支持 Function Calling,(👓) 代表视觉能力。 -这张表格我们会一直保持更新。同时,我们也留意着社区成员们所提出的关于模型供应商的各种[请求](https://github.com/langgenius/dify/discussions/categories/ideas)。如果你有需要的模型供应商却没在上面找到,不妨动手参与进来,通过提交一个PR(Pull Request)来做出你的贡献。欢迎查阅我们的 [contribution.md](../../community/contribution.md "mention")指南了解更多。 +这张表格我们会一直保持更新。同时,我们也留意着社区成员们所提出的关于模型供应商的各种[请求](https://github.com/langgenius/dify/discussions/categories/ideas)。如果你有需要的模型供应商却没在上面找到,不妨动手参与进来,通过提交一个PR(Pull Request)来做出你的贡献。欢迎查阅我们的[贡献指南](../../community/contribution)指南了解更多。 diff --git a/zh-hans/guides/application-orchestrate/agent.md b/zh-hans/guides/application-orchestrate/agent.md deleted file mode 100644 index aeb89cae..00000000 --- a/zh-hans/guides/application-orchestrate/agent.md +++ /dev/null @@ -1,69 +0,0 @@ -# Agent - -### 定义 - -智能助手(Agent Assistant),利用大语言模型的推理能力,能够自主对复杂的人类任务进行目标规划、任务拆解、工具调用、过程迭代,并在没有人类干预的情况下完成任务。 - -### 如何使用智能助手 - -为了方便快速上手使用,你可以在“探索”中找到智能助手的应用模板,添加到自己的工作区,或者在此基础上进行自定义。在全新的 Dify 工作室中,你也可以从零编排一个专属于你自己的智能助手,帮助你完成财务报表分析、撰写报告、Logo 设计、旅程规划等任务。 - -![探索-智能助手应用模板](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/5d28172c2852848223e91215cdf4ac53.png) - -选择智能助手的推理模型,智能助手的任务完成能力取决于模型推理能力,我们建议在使用智能助手时选择推理能力更强的模型系列如 gpt-4 
以获得更稳定的任务完成效果。 - -![选择智能助手的推理模型](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/c27751cbc6250569087d0b15ca2e69c2.png) - -你可以在“提示词”中编写智能助手的指令,为了能够达到更优的预期效果,你可以在指令中明确它的任务目标、工作流程、资源和限制等。 - -![编排智能助手的指令提示词](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/34fa9ee865c612fb10aa51befd2ea396.png) - -### 添加助手需要的工具 - -在“上下文”中,你可以添加智能助手可以用于查询的知识库工具,这将帮助它获取外部背景知识。 - -在“工具”中,你可以添加需要使用的工具。工具可以扩展 LLM 的能力,比如联网搜索、科学计算或绘制图片,赋予并增强了 LLM 连接外部世界的能力。Dify 提供了两种工具类型:**第一方工具**和**自定义工具**。 - -你可以直接使用 Dify 生态提供的第一方内置工具,或者轻松导入自定义的 API 工具(目前支持 OpenAPI / Swagger 和 OpenAI Plugin 规范)。 - -![添加助手需要的工具](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/cf59361ae13c2aa2a0762bc0064c6a17.png) - -“工具”功能允许用户借助外部能力,在 Dify 上创建出更加强大的 AI 应用。例如你可以为智能助理型应用(Agent)编排合适的工具,它可以通过任务推理、步骤拆解、调用工具完成复杂任务。 - -另外工具也可以方便将你的应用与其他系统或服务连接,与外部环境交互。例如代码执行、对专属信息源的访问等。你只需要在对话框中谈及需要调用的某个工具的名字,即可自动调用该工具。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/7adb47ad645fb7f1bd95848e78cb6a0f.png) - -### 配置 Agent - -在 Dify 上为智能助手提供了 Function calling(函数调用)和 ReAct 两种推理模式。已支持 Function Call 的模型系列如 gpt-3.5/gpt-4 拥有效果更佳、更稳定的表现,尚未支持 Function calling 的模型系列,我们支持了 ReAct 推理框架实现类似的效果。 - -在 Agent 配置中,你可以修改助手的迭代次数限制。 - -![Function Calling 模式](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/276c17da01c12a7549f0b382503c0557.png) - -![ReAct 模式](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/e1e3806eb438cb52d4c4b6940b8021f5.png) - -### 配置对话开场白 - -你可以为智能助手配置一套会话开场白和开场问题,配置的对话开场白将在每次用户初次对话中展示助手可以完成什么样的任务,以及可以提出的问题示例。 - -![配置会话开场白和开场问题](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/fc25255198c756ac939ec913fe36d7f9.png) - -### 添加文件上传 - -部分多模态 LLM 已原生支持处理文件,例如 [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) 或 [Gemini 1.5 
Pro](https://ai.google.dev/api/files)。你可以在 LLM 的官方网站了解文件上传能力的支持情况。 - -选择具备读取文件的 LLM,开启 “文档” 功能。无需复杂配置即可让当前 Chatbot 具备文件识别能力。 - -![](https://assets-docs.dify.ai/2024/11/9f0b7a3c67b58c0bd7926501284cbb7d.png) - -### 调试与预览 - -编排完智能助手之后,你可以在发布成应用之前进行调试与预览,查看助手的任务完成效果。 - -![调试与预览](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/cd4a7ffded1a86d5e4aed1b9df36dc64.png) - -### 应用发布 - -![应用发布为 Webapp](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/44b15a78c45e21afafd110864f78f33e.png) diff --git a/zh-hans/guides/application-orchestrate/app-toolkits/moderation-tool.md b/zh-hans/guides/application-orchestrate/app-toolkits/moderation-tool.md deleted file mode 100644 index aea6d840..00000000 --- a/zh-hans/guides/application-orchestrate/app-toolkits/moderation-tool.md +++ /dev/null @@ -1,27 +0,0 @@ -# 敏感内容审查 - -我们在与 AI 应用交互的过程中,往往在内容安全性,用户体验,法律法规等方面有较为苛刻的要求,此时我们需要“敏感词审查”功能,来为终端用户创造一个更好的交互环境。 在提示词编排页面,点击“添加功能”,找到底部的工具箱“内容审核”: - -![Content moderation](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/app-toolkits/09c3d5be9b7194e54d0333242c501719.png) - -### 功能一:调用 OpenAI Moderation API - -OpenAI 和大多数 LLM 公司提供的模型,都带有内容审查功能,确保不会输出包含有争议的内容,比如暴力,性和非法行为,并且 OpenAI 还开放了这种内容审查能力,具体可以参考 [platform.openai.com](https://platform.openai.com/docs/guides/moderation/overview) 。现在你也可以直接在 Dify 上调用 OpenAI Moderation API,你可以审核输入内容或输出内容,只要输入对应的“预设回复”即可。 - -![OpenAI Moderation API](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/app-toolkits/6b09f91a05c993e0aa6bb56eca71e607.png) - -### 功能二:自定义关键词 - -开发者可以自定义需要审查的敏感词,比如把“kill”作为关键词,在用户输入的时候作审核动作,要求预设回复内容为“The content is violating usage policies.”可以预见的结果是当用户在终端输入包含“kill”的语料片段,就会触发敏感词审查工具,返回预设回复内容。 - -![Keywords](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/app-toolkits/644c2b024f59497aed3cd8ac984c96e3.png) - -### 功能三: 敏感词审查 Moderation 扩展 - 
-不同的企业内部往往有着不同的敏感词审查机制,企业在开发自己的 AI 应用如企业内部知识库 ChatBot,需要对员工输入的查询内容作敏感词审查。为此,开发者可以根据自己企业内部的敏感词审查机制写一个 API 扩展,具体可参考 [moderation.md](../../extension/api-based-extension/moderation.md "mention"),从而在 Dify 上调用,实现敏感词审查的高度自定义和隐私保护。 - -![Moderation Settings](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/app-toolkits/d8b6dff6fce6d70795b87aefc56eb02b.png) - -比如我们在自己的本地服务中自定义敏感词审查规则:不能查询有关美国总统的名字的问题。当用户在`query`变量输入"Trump",则在对话时会返回 "Your content violates our usage policy." 测试效果如下: - -![Moderation Test](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/app-toolkits/970c894a68f017def62c0f7253b0f44e.png) diff --git a/zh-hans/guides/application-orchestrate/chatbot-application.md b/zh-hans/guides/application-orchestrate/chatbot-application.md deleted file mode 100644 index 7f3cb853..00000000 --- a/zh-hans/guides/application-orchestrate/chatbot-application.md +++ /dev/null @@ -1,79 +0,0 @@ -# 聊天助手 - -对话型应用采用一问一答模式与用户持续对话。 - -### 适用场景 - -对话型应用可以用在客户服务、在线教育、医疗保健、金融服务等领域。这些应用可以帮助组织提高工作效率、减少人工成本和提供更好的用户体验。 - -### 如何编排 - -对话型应用的编排支持:对话前提示词,变量,上下文,开场白和下一步问题建议。 - -下面边以做一个 **面试官** 的应用为例来介绍编排对话型应用。 - -#### 创建应用 - -在首页点击 “创建应用” 按钮创建应用。填上应用名称,应用类型选择**聊天助手**。 - -![](https://assets-docs.dify.ai/2024/12/572b246b74431dd550c5b61d9215dbaa.png) - -#### 编排应用 - -创建应用后会自动跳转到应用概览页。点击左侧菜单 **编排** 来编排应用。 - -![应用编排](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/f19f991c1295c2ed3230ef27fc47818e.png) - -**填写提示词** - -提示词用于约束 AI 给出专业的回复,让回应更加精确。你可以借助内置的提示生成器,编写合适的提示词。提示词内支持插入表单变量,例如 `{{input}}`。提示词中的变量的值会替换成用户填写的值。 - -示例: - -1. 输入提示指令,要求给出一段面试场景的提示词。 -2. 右侧内容框将自动生成提示词。 -3. 
你可以在提示词内插入自定义变量。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/9dbd67eacb701500c64e61978d35b812.png) - -为了更好的用户体验,可以加上对话开场白:`你好,{{name}}。我是你的面试官,Bob。你准备好了吗?`。点击页面底部的 “添加功能” 按钮,打开 “对话开场白” 的功能: - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/9c81c98c4b58f894ed92e06a4b9bcf87.png) - -编辑开场白时,还可以添加数个开场问题: - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/c74a502465c8f68b0c65d40abb4c71d4.png) - -#### 添加上下文 - -如果想要让 AI 的对话范围局限在[知识库](../knowledge-base/)内,例如企业内的客服话术规范,可以在“上下文”内引用知识库。 - -![](<../../.gitbook/assets/image (108) (1).png>) - -#### 添加文件上传 - -部分多模态 LLM 已原生支持处理文件,例如 [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) 或 [Gemini 1.5 Pro](https://ai.google.dev/api/files)。你可以在 LLM 的官方网站了解文件上传能力的支持情况。 - -选择具备读取文件的 LLM,开启 “文档” 功能。无需复杂配置即可让当前 Chatbot 具备文件识别能力。 - -![](https://assets-docs.dify.ai/2024/11/823399d85e8ced5068dc9da4f693170e.png) - -#### 调试 - -在右侧填写用户输入项,输入内容进行调试。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/21640904df527d7487b4ac646738971b.png) - -若 LLM 给出的回答结果不理想,你可以调整提示词或切换不同底层模型进行效果对比。如需更进一步,同时查看不同模型对于同一个问题的回答情况,请参考[多模型调试](./multiple-llms-debugging.md)。 - -#### 发布应用 - -调试好应用后,点击右上角的 **“发布”** 按钮生成独立的 AI 应用。除了通过公开 URL 体验该应用,你也进行基于 APIs 的二次开发、嵌入至网站内等操作。详情请参考[发布](https://docs.dify.ai/v/zh-hans/guides/application-publishing)。 - -如果想定制已发布的应用,可以 Fork 我们的开源的 [WebApp 的模版](https://github.com/langgenius/webapp-conversation)。基于模版改成符合你的情景与风格需求的应用。 - -### 常见问题 - -**如何在聊天助手内添加第三方工具?** - -聊天助手类型应用不支持添加第三方工具,你可以在 [Agent 类型](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/agent)应用内添加第三方工具。 diff --git a/zh-hans/guides/application-orchestrate/chatbot-application.mdx b/zh-hans/guides/application-orchestrate/chatbot-application.mdx index b1c285df..71cbbd50 100644 --- a/zh-hans/guides/application-orchestrate/chatbot-application.mdx +++ 
b/zh-hans/guides/application-orchestrate/chatbot-application.mdx @@ -71,7 +71,7 @@ title: 聊天助手 #### 发布应用 -调试好应用后,点击右上角的 **“发布”** 按钮生成独立的 AI 应用。除了通过公开 URL 体验该应用,你也进行基于 APIs 的二次开发、嵌入至网站内等操作。详情请参考[发布](https://docs.dify.ai/v/zh-hans/guides/application-publishing)。 +调试好应用后,点击右上角的 **“发布”** 按钮生成独立的 AI 应用。除了通过公开 URL 体验该应用,你也进行基于 APIs 的二次开发、嵌入至网站内等操作。详情请参考[发布](/zh-hans/guides/application-publishing)。 如果想定制已发布的应用,可以 Fork 我们的开源的 [WebApp 的模版](https://github.com/langgenius/webapp-conversation)。基于模版改成符合你的情景与风格需求的应用。 @@ -79,4 +79,4 @@ title: 聊天助手 **如何在聊天助手内添加第三方工具?** -聊天助手类型应用不支持添加第三方工具,你可以在 [Agent 类型](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/agent)应用内添加第三方工具。 +聊天助手类型应用不支持添加第三方工具,你可以在 [Agent 类型](/zh-hans/guides/application-orchestrate/agent)应用内添加第三方工具。 diff --git a/zh-hans/guides/application-orchestrate/creating-an-application.md b/zh-hans/guides/application-orchestrate/creating-an-application.md deleted file mode 100644 index 0598cd81..00000000 --- a/zh-hans/guides/application-orchestrate/creating-an-application.md +++ /dev/null @@ -1,55 +0,0 @@ -# 创建应用 - -你可以通过 3 种方式在 Dify 的工作室内创建应用: - -* 基于应用模板创建(新手推荐) -* 创建一个空白应用 -* 通过 DSL 文件(本地/在线)创建应用 - -### 从模板创建应用 - -初次使用 Dify 时,你可能对于应用创建比较陌生。为了帮助新手用户快速了解在 Dify 上能够构建哪些类型的应用,Dify 团队内的提示词工程师已经创建好了多场景、高质量的应用模板。 - -你可以从导航选择 「工作室 」,在应用列表内选择 「从模版创建」。 - -![从模板创建应用](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/6405fc6c267146c6be0cb779a7838504.png) - -任意选择某个模板,并将其添加至工作区。 - -### 创建一个新应用 - -如果你需要在 Dify 上创建一个空白应用,你可以从导航选择 「工作室」 ,在应用列表内选择 「从空白创建 」。 - -![](https://assets-docs.dify.ai/2024/12/bfee6805544a811553c5fe8d28227694.png) - -Dify 上可以创建 4 种不同的应用类型,分别是聊天助手、文本生成应用、Agent 和工作流。 - -创建应用时,你需要给应用起一个名字、选择合适的图标,或者上传喜爱的图片用作图标、使用一段清晰的文字描述此应用的用途,以便后续应用在团队内的使用。 - -{% embed url="https://www.motionshot.app/walkthrough/6765339bcf1efee248025520/embed?fullscreen=1&hideCopy=1&hideDownload=1&hideSteps=1" %} - -![](https://assets-docs.dify.ai/2024/12/1429eb56e0082c281f7aaeb48e72cb0f.png) - -### 通过 DSL 
文件创建应用 - -{% hint style="info" %} -Dify DSL 是由 Dify.AI 所定义的 AI 应用工程文件标准,文件格式为 YML。该标准涵盖应用在 Dify 内的基本描述、模型参数、编排配置等信息。 -{% endhint %} - -#### 本地导入 - -如果你从社区或其它人那里获得了一个应用模版(DSL 文件),可以从工作室选择 「 导入DSL 文件 」。DSL 文件导入后将直接加载原应用的所有配置信息。 - -![导入 DSL 文件创建应用](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/6615ef40b8c0027563a5a4ca0c315ff1.png) - -#### URL 导入 - -你也可以通过 URL 导入 DSL 文件,参考的链接格式: - -```url -https://example.com/your_dsl.yml -``` - -![通过 URL 导入 DSL 文件](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/63d45c8f397dd65258dd4b322f0a43fa.jpeg) - -> 导入 DSL 文件时将校对文件版本号。如果 DSL 版本号差异较大,有可能会出现兼容性问题。详细说明请参考 [应用管理:导入](https://docs.dify.ai/zh-hans/guides/management/app-management#dao-ru-ying-yong)。 diff --git a/zh-hans/guides/application-orchestrate/creating-an-application.mdx b/zh-hans/guides/application-orchestrate/creating-an-application.mdx index ce895b89..18cd73ac 100644 --- a/zh-hans/guides/application-orchestrate/creating-an-application.mdx +++ b/zh-hans/guides/application-orchestrate/creating-an-application.mdx @@ -2,7 +2,6 @@ title: 创建应用 --- - 你可以通过 3 种方式在 Dify 的工作室内创建应用: * 基于应用模板创建(新手推荐) @@ -29,7 +28,13 @@ Dify 上可以创建 4 种不同的应用类型,分别是聊天助手、文本 创建应用时,你需要给应用起一个名字、选择合适的图标,或者上传喜爱的图片用作图标、使用一段清晰的文字描述此应用的用途,以便后续应用在团队内的使用。 -{% embed url="https://www.motionshot.app/walkthrough/6765339bcf1efee248025520/embed?fullscreen=1&hideCopy=1&hideDownload=1&hideSteps=1" %} + ![](https://assets-docs.dify.ai/2024/12/1429eb56e0082c281f7aaeb48e72cb0f.png) diff --git a/zh-hans/guides/application-orchestrate/multiple-llms-debugging.md b/zh-hans/guides/application-orchestrate/multiple-llms-debugging.md deleted file mode 100644 index 0bfcec44..00000000 --- a/zh-hans/guides/application-orchestrate/multiple-llms-debugging.md +++ /dev/null @@ -1,24 +0,0 @@ -# 多模型调试 - -聊天助手应用类型支持 **“多个模型进行调试”** 功能,你可以同时批量检视不同模型对于相同问题的回答效果。 - 
-![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/d0de3f1edca55af986a6b213876ed3ff.png) - -最多支持同时添加 4 个大模型。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/application-orchestrate/74003eeacf3f4f8f0d7cb6c3ad20c84a.png) - -调试时如果发现效果较好的模型,点击 **“单一模型进行调试”** 即可进入至当前模型的单独调试页面。 - -![](https://assets-docs.dify.ai/2025/02/d273dee2ec4c04f7208a4f7a8b3a86db.png) - -## 常见问题 - -### 1. 添加大模型时没有看到其它模型应该如何处理? - -前往 [“增加新供应商”](https://docs.dify.ai/v/zh-hans/guides/model-configuration/new-provider),按照页面提示手动添加多个模型的 Key。 - -### 2. 如何退出多模型调试模式? - -选择任意模型,点击 **“单一模型进行调试”** 选项即可退出多模型调试模式。 - diff --git a/zh-hans/guides/application-orchestrate/multiple-llms-debugging.mdx b/zh-hans/guides/application-orchestrate/multiple-llms-debugging.mdx index 6b45a415..26c0b14d 100644 --- a/zh-hans/guides/application-orchestrate/multiple-llms-debugging.mdx +++ b/zh-hans/guides/application-orchestrate/multiple-llms-debugging.mdx @@ -19,7 +19,7 @@ title: 多模型调试 ### 1. 添加大模型时没有看到其它模型应该如何处理? -前往 [“增加新供应商”](https://docs.dify.ai/v/zh-hans/guides/model-configuration/new-provider),按照页面提示手动添加多个模型的 Key。 +前往 [“增加新供应商”](/zh-hans/guides/model-configuration/new-provider),按照页面提示手动添加多个模型的 Key。 ### 2. 如何退出多模型调试模式? 
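A recurring change across the hunks above is rewriting absolute `https://docs.dify.ai/v/zh-hans/...` links to site-relative `/zh-hans/...` paths. A minimal sketch of how that rewrite could be automated; the helper name and the exact regex are illustrative assumptions, not part of the scripts in this patch:

```python
import re

# Hypothetical helper (not in this repo's scripts): rewrites absolute
# docs.dify.ai URLs of the form https://docs.dify.ai/v/zh-hans/... into
# site-relative /zh-hans/... links, mirroring the manual edits above.
DOCS_LINK = re.compile(r'https://docs\.dify\.ai/v/(zh-hans/[^)\s"]+)')

def relativize_links(markdown: str) -> str:
    """Replace absolute docs.dify.ai URLs with site-relative paths."""
    return DOCS_LINK.sub(r'/\1', markdown)

print(relativize_links(
    "前往 [增加新供应商](https://docs.dify.ai/v/zh-hans/guides/model-configuration/new-provider)"
))
```

Anchors and query strings survive the rewrite because the character class only stops at `)` , whitespace, or a quote.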
diff --git a/zh-hans/guides/application-orchestrate/readme.md b/zh-hans/guides/application-orchestrate/readme.md deleted file mode 100644 index 08e345b5..00000000 --- a/zh-hans/guides/application-orchestrate/readme.md +++ /dev/null @@ -1,26 +0,0 @@ -# 构建应用 - -在 Dify 中,一个“应用”是指基于 GPT 等大语言模型构建的实际场景应用。通过创建应用,你可以将智能 AI 技术应用于特定的需求。它既包含了开发 AI 应用的工程范式,也包含了具体的交付物。 - -简而言之,一个应用为开发者交付了: - -* 封装友好的 API,可由后端或前端应用直接调用,通过 Token 鉴权 -* 开箱即用、美观且托管的 WebApp,你可以 WebApp 的模版进行二次开发 -* 一套包含提示词工程、上下文管理、日志分析和标注的易用界面 - -你可以任选**其中之一**或**全部**,来支撑你的 AI 应用开发。 - -### 应用类型 - -Dify 中提供了五种应用类型: - -* **聊天助手**:基于 LLM 构建对话式交互的助手 -* **文本生成应用**:面向文本生成类任务的助手,例如撰写故事、文本分类、翻译等 -* **Agent**:能够分解任务、推理思考、调用工具的对话式智能助手 -* **对话流**:适用于定义等复杂流程的多轮对话场景,具有记忆功能的应用编排方式 -* **工作流**:适用于自动化、批处理等单轮生成类任务的场景的应用编排方式 - -文本生成应用与聊天助手的区别见下表: - -
-| | 文本生成应用 | 聊天助手 |
-| --- | --- | --- |
-| WebApp 界面 | 表单+结果式 | 聊天式 |
-| WebAPI 端点 | completion-messages | chat-messages |
-| 交互方式 | 一问一答 | 多轮对话 |
-| 流式结果返回 | 支持 | 支持 |
-| 上下文保存 | 当次 | 持续 |
-| 用户输入表单 | 支持 | 支持 |
-| 知识库与插件 | 支持 | 支持 |
-| AI 开场白 | 不支持 | 支持 |
-| 情景举例 | 翻译、判断、索引 | 聊天 |
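
下面用一段示意代码说明两个 WebAPI 端点(`completion-messages` 与 `chat-messages`)在请求结构上的差异。这是假设性示例,仅演示请求体的大致构造,不实际发送请求,具体字段以 Dify 的 API 文档为准:

```python
def build_completion_request(query: str, user: str) -> tuple[str, dict]:
    """文本生成应用:一问一答,每次请求相互独立,不保存上下文。"""
    return "/v1/completion-messages", {
        "inputs": {"query": query},    # 输入变量通过 inputs 传入
        "response_mode": "streaming",  # 两类端点均支持流式返回
        "user": user,
    }


def build_chat_request(query: str, user: str, conversation_id: str = "") -> tuple[str, dict]:
    """聊天助手:多轮对话,通过 conversation_id 延续上下文。"""
    return "/v1/chat-messages", {
        "inputs": {},
        "query": query,                      # 用户提问作为顶层字段
        "response_mode": "streaming",
        "conversation_id": conversation_id,  # 为空则创建新会话
        "user": user,
    }
```

可以看到,聊天助手端点依赖 `conversation_id` 维持多轮上下文,而文本生成端点的每次请求都是独立的,这正是上表中“上下文保存:当次 / 持续”的差异来源。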
- diff --git a/zh-hans/guides/application-orchestrate/readme.mdx b/zh-hans/guides/application-orchestrate/readme.mdx index d0d3beeb..733d1168 100644 --- a/zh-hans/guides/application-orchestrate/readme.mdx +++ b/zh-hans/guides/application-orchestrate/readme.mdx @@ -1,5 +1,5 @@ --- -title: 构建应用 +title: 应用类型简介 --- diff --git a/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx b/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx index 9b5d3bf6..67c9e6ef 100644 --- a/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx +++ b/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents/readme.mdx @@ -34,7 +34,7 @@ title: 知识库创建步骤 在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ ](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。Unstructured 能够高效地提取并转换你的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: * SaaS 版不可选,默认使用 Unstructured ETL; -* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; 文件解析支持格式的差异: diff --git a/zh-hans/guides/knowledge-base/knowledge-base-creation/introduction.mdx b/zh-hans/guides/knowledge-base/knowledge-base-creation/introduction.mdx index 1da90b2c..576eb7ef 100644 --- a/zh-hans/guides/knowledge-base/knowledge-base-creation/introduction.mdx +++ b/zh-hans/guides/knowledge-base/knowledge-base-creation/introduction.mdx @@ -36,7 +36,7 @@ title: 创建步骤 在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ ](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。Unstructured 能够高效地提取并转换你的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: 
* SaaS 版不可选,默认使用 Unstructured ETL; -* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; 文件解析支持格式的差异: diff --git a/zh-hans/guides/model-configuration/README.md b/zh-hans/guides/model-configuration/README.md deleted file mode 100644 index 4789fb14..00000000 --- a/zh-hans/guides/model-configuration/README.md +++ /dev/null @@ -1,78 +0,0 @@ -# 模型 - -Dify 是基于大语言模型的 AI 应用开发平台,初次使用时你需要先在 Dify 的 **设置 -- 模型供应商** 页面内添加并配置所需要的模型。 - -![设置-模型供应商](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/e83d92f43b9cd8eca719b3c27ca63418.png) - -Dify 目前已支持主流的模型供应商,例如 OpenAI 的 GPT 系列、Anthropic 的 Claude 系列等。不同模型的能力表现、参数类型会不一样,你可以根据不同情景的应用需求选择你喜欢的模型供应商。**你在 Dify 应用以下模型能力前,应该前往不同的模型厂商官方网站获得他们的 API key 。** - -### 模型类型 - -在 Dify 中,我们按模型的使用场景将模型分为以下 4 类: - -1. **系统推理模型**。 在创建的应用中,用的是该类型的模型。智聊、对话名称生成、下一步问题建议用的也是推理模型。 - - > 已支持的系统推理模型供应商:[OpenAI](https://platform.openai.com/account/api-keys)、[Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service/)、[Anthropic](https://console.anthropic.com/account/keys)、Hugging Face Hub、Replicate、Xinference、OpenLLM、[讯飞星火](https://www.xfyun.cn/solutions/xinghuoAPI)、[文心一言](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application)、[通义千问](https://dashscope.console.aliyun.com/api-key\_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k)、[Minimax](https://api.minimax.chat/user-center/basic-information/interface-key)、ZHIPU(ChatGLM) -2. **Embedding 模型**。在知识库中,将分段过的文档做 Embedding 用的是该类型的模型。在使用了知识库的应用中,将用户的提问做 Embedding 处理也是用的该类型的模型。 - - > 已支持的 Embedding 模型供应商:OpenAI、ZHIPU(ChatGLM)、Jina AI([Jina Embeddings](https://jina.ai/embeddings/)) -3. 
[**Rerank 模型**](https://docs.dify.ai/v/zh-hans/advanced/retrieval-augment/rerank)。**Rerank 模型用于增强检索能力,改善 LLM 的搜索结果。** - - > 已支持的 Rerank 模型供应商:Cohere、Jina AI([Jina Reranker](https://jina.ai/reranker)) -4. **语音转文字模型**。在对话型应用中,将语音转文字用的是该类型的模型。 - - > 已支持的语音转文字模型供应商:OpenAI - -根据技术变化和用户需求,我们将陆续支持更多 LLM 供应商。 - -### 托管模型试用服务 - -我们为 Dify 云服务的用户提供了不同模型的试用额度,请在该额度耗尽前设置你自己的模型供应商,否则将会影响应用的正常使用。 - -* **OpenAI 托管模型试用:** 我们提供 200 次调用次数供你试用体验,可用于 GPT3.5-turbo、GPT3.5-turbo-16k、text-davinci-003 模型。 - -### 设置默认模型 - -Dify 在需要模型时,会根据使用场景来选择设置过的默认模型。在 `设置 > 模型供应商` 中设置默认模型。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/c5ac5f32deb020a8aae46045d3ee9c8d.png) - -系统默认推理模型(System Reasoning Model):设置创建应用使用的默认推理模型,以及对话名称生成、下一步问题建议等功能也会使用该默认推理模型。 - -### 接入模型设置 - -在 Dify 的 `设置 > 模型供应商` 中设置要接入的模型。 - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/a8d65b27c806b0a6fabe88099023d116.png) - -模型供应商分为两种: - -1. 自有模型。该类型的模型供应商提供的是自己开发的模型。如 OpenAI,Anthropic 等。 -2. 
托管模型。该类型的模型供应商提供的是第三方模型。如 Hugging Face,Replicate 等。 - -在 Dify 中接入不同类型的模型供应商的方式稍有不同。 - -**接入自有模型的模型供应商** - -接入自有模型的供应商后,Dify 会自动接入该供应商下的所有模型。 - -在 Dify 中设置对应模型供应商的 API key,即可接入该模型供应商。 - -{% hint style="info" %} -Dify 使用了 [PKCS1\_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html) 来加密存储用户托管的 API 密钥,每个租户均使用了独立的密钥对进行加密,确保你的 API 密钥不被泄漏。 -{% endhint %} - -**接入托管模型的模型供应商** - -托管类型的供应商上面有很多第三方模型。接入模型需要一个个的添加。具体接入方式如下: - -* [Hugging Face](../../development/models-integration/hugging-face.md) -* [Replicate](../../development/models-integration/replicate.md) -* [Xinference](../../development/models-integration/xinference.md) -* [OpenLLM](../../development/models-integration/openllm.md) - -### 使用模型 - -配置完模型后,就可以在应用中使用这些模型了: - -![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/055e92f8634c0ba1bdd2c559f7d8997d.png) diff --git a/zh-hans/guides/model-configuration/interfaces.md b/zh-hans/guides/model-configuration/interfaces.md deleted file mode 100644 index f59325bd..00000000 --- a/zh-hans/guides/model-configuration/interfaces.md +++ /dev/null @@ -1,746 +0,0 @@ -# 接口方法 - -这里介绍供应商和各模型类型需要实现的接口方法和参数说明。 - -## 供应商 - -继承 `__base.model_provider.ModelProvider` 基类,实现以下接口: - -```python -def validate_provider_credentials(self, credentials: dict) -> None: - """ - Validate provider credentials - You can choose any validate_credentials method of model type or implement validate method by yourself, - such as: get model list api - - if validate failed, raise exception - - :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. 
- """ -``` - -- `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 定义,传入如:`api_key` 等。 - -验证失败请抛出 `errors.validate.CredentialsValidateFailedError` 错误。 - -**注:预定义模型需完整实现该接口,自定义模型供应商只需要如下简单实现即可** - -```python -class XinferenceProvider(Provider): - def validate_provider_credentials(self, credentials: dict) -> None: - pass -``` - -## 模型 - -模型分为 5 种不同的模型类型,不同模型类型继承的基类不同,需要实现的方法也不同。 - -### 通用接口 - -所有模型均需要统一实现下面 2 个方法: - -- 模型凭据校验 - - 与供应商凭据校验类似,这里针对单个模型进行校验。 - - ```python - def validate_credentials(self, model: str, credentials: dict) -> None: - """ - Validate model credentials - - :param model: model name - :param credentials: model credentials - :return: - """ - ``` - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - 验证失败请抛出 `errors.validate.CredentialsValidateFailedError` 错误。 - -- 调用异常错误映射表 - - 当模型调用异常时需要映射到 Runtime 指定的 `InvokeError` 类型,方便 Dify 针对不同错误做不同后续处理。 - - Runtime Errors: - - - `InvokeConnectionError` 调用连接错误 - - `InvokeServerUnavailableError ` 调用服务方不可用 - - `InvokeRateLimitError ` 调用达到限额 - - `InvokeAuthorizationError` 调用鉴权失败 - - `InvokeBadRequestError ` 调用传参有误 - - ```python - @property - def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: - """ - Map model invoke error to unified error - The key is the error type thrown to the caller - The value is the error type thrown by the model, - which needs to be converted into a unified error type for the caller. 
- - :return: Invoke error mapping - """ - ``` - - 也可以直接抛出对应 Errors,并做如下定义,这样在之后的调用中可以直接抛出`InvokeConnectionError`等异常。 - - ```python - @property - def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: - return { - InvokeConnectionError: [ - InvokeConnectionError - ], - InvokeServerUnavailableError: [ - InvokeServerUnavailableError - ], - InvokeRateLimitError: [ - InvokeRateLimitError - ], - InvokeAuthorizationError: [ - InvokeAuthorizationError - ], - InvokeBadRequestError: [ - InvokeBadRequestError - ], - } - ``` - -​ 可参考 OpenAI `_invoke_error_mapping`。 - -### LLM - -继承 `__base.large_language_model.LargeLanguageModel` 基类,实现以下接口: - -- LLM 调用 - - 实现 LLM 调用的核心方法,可同时支持流式和同步返回。 - - ```python - def _invoke(self, model: str, credentials: dict, - prompt_messages: list[PromptMessage], model_parameters: dict, - tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, - stream: bool = True, user: Optional[str] = None) \ - -> Union[LLMResult, Generator]: - """ - Invoke large language model - - :param model: model name - :param credentials: model credentials - :param prompt_messages: prompt messages - :param model_parameters: model parameters - :param tools: tools for tool calling - :param stop: stop words - :param stream: is stream response - :param user: unique user id - :return: full response or stream response chunk generator result - """ - ``` - - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `prompt_messages` (array[[PromptMessage](#PromptMessage)]) Prompt 列表 - - 若模型为 `Completion` 类型,则列表只需要传入一个 [UserPromptMessage](#UserPromptMessage) 元素即可; - - 若模型为 `Chat` 类型,需要根据消息不同传入 [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) 元素列表 - - - `model_parameters` (object) 模型参数 - - 
模型参数由模型 YAML 配置的 `parameter_rules` 定义。 - - - `tools` (array[[PromptMessageTool](#PromptMessageTool)]) [optional] 工具列表,等同于 `function calling` 中的 `function`。 - - 即传入 tool calling 的工具列表。 - - - `stop` (array[string]) [optional] 停止序列 - - 模型返回将在停止序列定义的字符串之前停止输出。 - - - `stream` (bool) 是否流式输出,默认 True - - 流式输出返回 Generator[[LLMResultChunk](#LLMResultChunk)],非流式输出返回 [LLMResult](#LLMResult)。 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回 - - 流式输出返回 Generator[[LLMResultChunk](#LLMResultChunk)],非流式输出返回 [LLMResult](#LLMResult)。 - -- 预计算输入 tokens - - 若模型未提供预计算 tokens 接口,可直接返回 0。 - - ```python - def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], - tools: Optional[list[PromptMessageTool]] = None) -> int: - """ - Get number of tokens for given prompt messages - - :param model: model name - :param credentials: model credentials - :param prompt_messages: prompt messages - :param tools: tools for tool calling - :return: - """ - ``` - - 参数说明见上述 `LLM 调用`。 - - 该接口需要根据对应`model`选择合适的`tokenizer`进行计算,如果对应模型没有提供`tokenizer`,可以使用`AIModel`基类中的`_get_num_tokens_by_gpt2(text: str)`方法进行计算。 - -- 获取自定义模型规则 [可选] - - ```python - def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]: - """ - Get customizable model schema - - :param model: model name - :param credentials: model credentials - :return: model schema - """ - ``` - -​当供应商支持增加自定义 LLM 时,可实现此方法让自定义模型可获取模型规则,默认返回 None。 - -对于`OpenAI`供应商下的大部分微调模型,可以通过其微调模型名称获取到其基类模型,如`gpt-3.5-turbo-1106`,然后返回基类模型的预定义参数规则,参考[openai](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) -的具体实现 - -### TextEmbedding - -继承 `__base.text_embedding_model.TextEmbeddingModel` 基类,实现以下接口: - -- Embedding 调用 - - ```python - def _invoke(self, model: str, credentials: dict, - texts: list[str], user: Optional[str] = None) \ - -> TextEmbeddingResult: - """ - Invoke large language model - - :param model: model name - :param 
credentials: model credentials - :param texts: texts to embed - :param user: unique user id - :return: embeddings result - """ - ``` - - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `texts` (array[string]) 文本列表,可批量处理 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回: - - [TextEmbeddingResult](#TextEmbeddingResult) 实体。 - -- 预计算 tokens - - ```python - def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int: - """ - Get number of tokens for given prompt messages - - :param model: model name - :param credentials: model credentials - :param texts: texts to embed - :return: - """ - ``` - - 参数说明见上述 `Embedding 调用`。 - - 同上述`LargeLanguageModel`,该接口需要根据对应`model`选择合适的`tokenizer`进行计算,如果对应模型没有提供`tokenizer`,可以使用`AIModel`基类中的`_get_num_tokens_by_gpt2(text: str)`方法进行计算。 - -### Rerank - -继承 `__base.rerank_model.RerankModel` 基类,实现以下接口: - -- rerank 调用 - - ```python - def _invoke(self, model: str, credentials: dict, - query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, - user: Optional[str] = None) \ - -> RerankResult: - """ - Invoke rerank model - - :param model: model name - :param credentials: model credentials - :param query: search query - :param docs: docs for reranking - :param score_threshold: score threshold - :param top_n: top n - :param user: unique user id - :return: rerank result - """ - ``` - - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `query` (string) 查询请求内容 - - - `docs` (array[string]) 需要重排的分段列表 - - - `score_threshold` (float) [optional] Score 阈值 - - - `top_n` (int) [optional] 取前 n 个分段 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回: - - [RerankResult](#RerankResult) 实体。 - -### Speech2text 
- -继承 `__base.speech2text_model.Speech2TextModel` 基类,实现以下接口: - -- Invoke 调用 - - ```python - def _invoke(self, model: str, credentials: dict, - file: IO[bytes], user: Optional[str] = None) \ - -> str: - """ - Invoke large language model - - :param model: model name - :param credentials: model credentials - :param file: audio file - :param user: unique user id - :return: text for given audio file - """ - ``` - - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `file` (File) 文件流 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回: - - 语音转换后的字符串。 - -### Text2speech - -继承 `__base.text2speech_model.Text2SpeechModel` 基类,实现以下接口: - -- Invoke 调用 - - ```python - def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None): - """ - Invoke large language model - - :param model: model name - :param credentials: model credentials - :param content_text: text content to be translated - :param streaming: output is streaming - :param user: unique user id - :return: translated audio file - """ - ``` - - - 参数: - - - `model` (string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `content_text` (string) 需要转换的文本内容 - - - `streaming` (bool) 是否进行流式输出 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回: - - 文本转换后的语音流。 - -### Moderation - -继承 `__base.moderation_model.ModerationModel` 基类,实现以下接口: - -- Invoke 调用 - - ```python - def _invoke(self, model: str, credentials: dict, - text: str, user: Optional[str] = None) \ - -> bool: - """ - Invoke large language model - - :param model: model name - :param credentials: model credentials - :param text: text to moderate - :param user: unique user id - :return: false if text is safe, true otherwise - """ - ``` - - - 参数: - - - `model` 
(string) 模型名称 - - - `credentials` (object) 凭据信息 - - 凭据信息的参数由供应商 YAML 配置文件的 `provider_credential_schema` 或 `model_credential_schema` 定义,传入如:`api_key` 等。 - - - `text` (string) 文本内容 - - - `user` (string) [optional] 用户的唯一标识符 - - 可以帮助供应商监控和检测滥用行为。 - - - 返回: - - False 代表传入的文本安全,True 则反之。 - - - -## 实体 - -### PromptMessageRole - -消息角色 - -```python -class PromptMessageRole(Enum): - """ - Enum class for prompt message. - """ - SYSTEM = "system" - USER = "user" - ASSISTANT = "assistant" - TOOL = "tool" -``` - -### PromptMessageContentType - -消息内容类型,分为纯文本和图片。 - -```python -class PromptMessageContentType(Enum): - """ - Enum class for prompt message content type. - """ - TEXT = 'text' - IMAGE = 'image' -``` - -### PromptMessageContent - -消息内容基类,仅作为参数声明用,不可初始化。 - -```python -class PromptMessageContent(BaseModel): - """ - Model class for prompt message content. - """ - type: PromptMessageContentType - data: str # 内容数据 -``` - -当前支持文本和图片两种类型,可支持同时传入文本和多图。 - -需要分别初始化 `TextPromptMessageContent` 和 `ImagePromptMessageContent` 传入。 - -### TextPromptMessageContent - -```python -class TextPromptMessageContent(PromptMessageContent): - """ - Model class for text prompt message content. - """ - type: PromptMessageContentType = PromptMessageContentType.TEXT -``` - -若传入图文,其中文字需要构造此实体作为 `content` 列表中的一部分。 - -### ImagePromptMessageContent - -```python -class ImagePromptMessageContent(PromptMessageContent): - """ - Model class for image prompt message content. - """ - class DETAIL(Enum): - LOW = 'low' - HIGH = 'high' - - type: PromptMessageContentType = PromptMessageContentType.IMAGE - detail: DETAIL = DETAIL.LOW # 分辨率 -``` - -若传入图文,其中图片需要构造此实体作为 `content` 列表中的一部分 - -`data` 可以为 `url` 或者图片 `base64` 加密后的字符串。 - -### PromptMessage - -所有 Role 消息体的基类,仅作为参数声明用,不可初始化。 - -```python -class PromptMessage(ABC, BaseModel): - """ - Model class for prompt message. 
- """ - role: PromptMessageRole # 消息角色 - content: Optional[str | list[PromptMessageContent]] = None # 支持两种类型,字符串和内容列表,内容列表是为了满足多模态的需要,可详见 PromptMessageContent 说明。 - name: Optional[str] = None # 名称,可选。 -``` - -### UserPromptMessage - -UserMessage 消息体,代表用户消息。 - -```python -class UserPromptMessage(PromptMessage): - """ - Model class for user prompt message. - """ - role: PromptMessageRole = PromptMessageRole.USER -``` - -### AssistantPromptMessage - -代表模型返回消息,通常用于 `few-shots` 或聊天历史传入。 - -```python -class AssistantPromptMessage(PromptMessage): - """ - Model class for assistant prompt message. - """ - class ToolCall(BaseModel): - """ - Model class for assistant prompt message tool call. - """ - class ToolCallFunction(BaseModel): - """ - Model class for assistant prompt message tool call function. - """ - name: str # 工具名称 - arguments: str # 工具参数 - - id: str # 工具 ID,仅在 OpenAI tool call 生效,为工具调用的唯一 ID,同一个工具可以调用多次 - type: str # 默认 function - function: ToolCallFunction # 工具调用信息 - - role: PromptMessageRole = PromptMessageRole.ASSISTANT - tool_calls: list[ToolCall] = [] # 模型回复的工具调用结果(仅当传入 tools,并且模型认为需要调用工具时返回) -``` - -其中 `tool_calls` 为调用模型传入 `tools` 后,由模型返回的 `tool call` 列表。 - -### SystemPromptMessage - -代表系统消息,通常用于设定给模型的系统指令。 - -```python -class SystemPromptMessage(PromptMessage): - """ - Model class for system prompt message. - """ - role: PromptMessageRole = PromptMessageRole.SYSTEM -``` - -### ToolPromptMessage - -代表工具消息,用于工具执行后将结果交给模型进行下一步计划。 - -```python -class ToolPromptMessage(PromptMessage): - """ - Model class for tool prompt message. - """ - role: PromptMessageRole = PromptMessageRole.TOOL - tool_call_id: str # 工具调用 ID,若不支持 OpenAI tool call,也可传入工具名称 -``` - -基类的 `content` 传入工具执行结果。 - -### PromptMessageTool - -```python -class PromptMessageTool(BaseModel): - """ - Model class for prompt message tool. 
- """ - name: str # 工具名称 - description: str # 工具描述 - parameters: dict # 工具参数 dict -``` - ---- - -### LLMResult - -```python -class LLMResult(BaseModel): - """ - Model class for llm result. - """ - model: str # 实际使用模型 - prompt_messages: list[PromptMessage] # prompt 消息列表 - message: AssistantPromptMessage # 回复消息 - usage: LLMUsage # 使用的 tokens 及费用信息 - system_fingerprint: Optional[str] = None # 请求指纹,可参考 OpenAI 该参数定义 -``` - -### LLMResultChunkDelta - -流式返回中每个迭代内部 `delta` 实体 - -```python -class LLMResultChunkDelta(BaseModel): - """ - Model class for llm result chunk delta. - """ - index: int # 序号 - message: AssistantPromptMessage # 回复消息 - usage: Optional[LLMUsage] = None # 使用的 tokens 及费用信息,仅最后一条返回 - finish_reason: Optional[str] = None # 结束原因,仅最后一条返回 -``` - -### LLMResultChunk - -流式返回中每个迭代实体 - -```python -class LLMResultChunk(BaseModel): - """ - Model class for llm result chunk. - """ - model: str # 实际使用模型 - prompt_messages: list[PromptMessage] # prompt 消息列表 - system_fingerprint: Optional[str] = None # 请求指纹,可参考 OpenAI 该参数定义 - delta: LLMResultChunkDelta # 每个迭代存在变化的内容 -``` - -### LLMUsage - -```python -class LLMUsage(ModelUsage): - """ - Model class for llm usage. - """ - prompt_tokens: int # prompt 使用 tokens - prompt_unit_price: Decimal # prompt 单价 - prompt_price_unit: Decimal # prompt 价格单位,即单价基于多少 tokens - prompt_price: Decimal # prompt 费用 - completion_tokens: int # 回复使用 tokens - completion_unit_price: Decimal # 回复单价 - completion_price_unit: Decimal # 回复价格单位,即单价基于多少 tokens - completion_price: Decimal # 回复费用 - total_tokens: int # 总使用 token 数 - total_price: Decimal # 总费用 - currency: str # 货币单位 - latency: float # 请求耗时(s) -``` - ---- - -### TextEmbeddingResult - -```python -class TextEmbeddingResult(BaseModel): - """ - Model class for text embedding result. 
- """ - model: str # 实际使用模型 - embeddings: list[list[float]] # embedding 向量列表,对应传入的 texts 列表 - usage: EmbeddingUsage # 使用信息 -``` - -### EmbeddingUsage - -```python -class EmbeddingUsage(ModelUsage): - """ - Model class for embedding usage. - """ - tokens: int # 使用 token 数 - total_tokens: int # 总使用 token 数 - unit_price: Decimal # 单价 - price_unit: Decimal # 价格单位,即单价基于多少 tokens - total_price: Decimal # 总费用 - currency: str # 货币单位 - latency: float # 请求耗时(s) -``` - ---- - -### RerankResult - -```python -class RerankResult(BaseModel): - """ - Model class for rerank result. - """ - model: str # 实际使用模型 - docs: list[RerankDocument] # 重排后的分段列表 -``` - -### RerankDocument - -```python -class RerankDocument(BaseModel): - """ - Model class for rerank document. - """ - index: int # 原序号 - text: str # 分段文本内容 - score: float # 分数 -``` diff --git a/zh-hans/guides/model-configuration/load-balancing.md b/zh-hans/guides/model-configuration/load-balancing.md deleted file mode 100644 index f067c7a9..00000000 --- a/zh-hans/guides/model-configuration/load-balancing.md +++ /dev/null @@ -1,35 +0,0 @@ -# 负载均衡 - -模型速率限制(Rate limits)是模型厂商对用户或客户在指定时间内访问 API 服务次数所添加的限制。它有助于防止 API 的滥用或误用,有助于确保每个用户都能公平地访问 API,控制基础设施的总体负载。 - -在企业级大规模调用模型 API 时,高并发请求会导致超过请求速率限制并影响用户访问。负载均衡可以通过在多个 API 端点之间分配 API 请求,确保所有用户都能获得最快的响应和最高的模型调用吞吐量,保障业务稳定运行。 - -你可以在 **模型供应商 -- 模型列表 -- 设置模型负载均衡** 打开该功能,并在同一个模型上添加多个凭据 (API key)。 - -![模型负载均衡](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/c2781fddfb5c7e76023ac59c926d9e37.png) - -{% hint style="info" %} -模型负载均衡为付费特性,你可以通过[订阅 SaaS 付费服务](../../getting-started/cloud.md#ding-yue-ji-hua)或者购买企业版来开启该功能。 -{% endhint %} - -默认配置中的 API Key 为初次配置模型供应商时添加的凭据,你需要点击 **增加配置** 添加同一模型的不同 API Key 来正常使用负载均衡功能。 - -![配置负载均衡](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/cbf2149eb7fcf613599f50ff58381889.png) - -**需要额外添加至少 1 个模型凭据**即可保存并开启负载均衡。 - -你也可以将已配置的凭据**临时停用**或者**删除**。 - 
-![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/8012d5be22981efe0e59b81f32a961fe.png) - -配置完成后在模型列表内会显示所有已开启负载均衡的模型。 - -![开启负载均衡](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/f69088f1f5a176aa0819a68950ac595c.png) - -{% hint style="info" %} -默认情况下,负载均衡使用 Round-robin 策略。如果触发速率限制,将应用 1 分钟的冷却时间。 -{% endhint %} - -你也可以从 **添加模型** 配置负载均衡,配置流程与上面一致。 - -![从添加模型配置负载均衡](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/model-configuration/12970502b2e202d1f890dcecadf2dcbd.png) diff --git a/zh-hans/guides/model-configuration/new-provider.md b/zh-hans/guides/model-configuration/new-provider.md deleted file mode 100644 index 94e551bc..00000000 --- a/zh-hans/guides/model-configuration/new-provider.md +++ /dev/null @@ -1,192 +0,0 @@ -# 增加新供应商 - -### 供应商配置方式 - -供应商支持三种模型配置方式: - -**预定义模型(predefined-model)** - -表示用户只需要配置统一的供应商凭据即可使用供应商下的预定义模型。 - -**自定义模型(customizable-model)** - -用户需要新增每个模型的凭据配置,如 Xinference,它同时支持 LLM 和 Text Embedding,但是每个模型都有唯一的 **model\_uid**,如果想要将两者同时接入,就需要为每个模型配置一个 **model\_uid**。 - -**从远程获取(fetch-from-remote)** - -与 `predefined-model` 配置方式一致,只需要配置统一的供应商凭据即可,模型通过凭据信息从供应商获取。 - -如 OpenAI,我们可以基于 gpt-3.5-turbo 来 Fine Tune 多个模型,而它们都位于同一个 **api\_key** 下,当配置为`fetch-from-remote`时,开发者只需要配置统一的 **api\_key** 即可让 Dify Runtime 获取到开发者所有的微调模型并接入 Dify。 - -这三种配置方式**支持共存**,即存在供应商支持`predefined-model` + `customizable-model` 或 `predefined-model` + `fetch-from-remote`等,也就是配置了供应商统一凭据可以使用预定义模型和从远程获取的模型,若新增了模型,则可以在此基础上额外使用自定义的模型。 - -### 配置说明 - -**名词解释** - -* `module`: 一个`module`即为一个 Python Package,或者通俗一点,称为一个文件夹,里面包含了一个`__init__.py`文件,以及其他的`.py`文件。 - -**步骤** - -新增一个供应商主要分为几步,这里简单列出,帮助大家有一个大概的认识,具体的步骤会在下面详细介绍。 - -* 创建供应商 yaml 文件,根据 [Provider Schema](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) 编写。 -* 创建供应商代码,实现一个`class`。 -* 根据模型类型,在供应商`module`下创建对应的模型类型 `module`,如`llm`或`text_embedding`。 -* 根据模型类型,在对应的模型`module`下创建同名的代码文件,如`llm.py`,并实现一个`class`。 - 
如果有预定义模型,根据模型名称创建同名的yaml文件在模型`module`下,如`claude-2.1.yaml`,根据 [AI Model Entity](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md#aimodelentity) 编写。 -* 编写测试代码,确保功能可用。 - -#### 开始吧 - -增加一个新的供应商需要先确定供应商的英文标识,如 `anthropic`,使用该标识在 `model_providers` 创建以此为名称的 `module`。 - -在此 `module` 下,我们需要先准备供应商的 YAML 配置。 - -**准备供应商 YAML** - -此处以 `Anthropic` 为例,预设了供应商基础信息、支持的模型类型、配置方式、凭据规则。 - -```YAML -provider: anthropic # 供应商标识 -label: # 供应商展示名称,可设置 en_US 英文、zh_Hans 中文两种语言,zh_Hans 不设置将默认使用 en_US。 - en_US: Anthropic -icon_small: # 供应商小图标,存储在对应供应商实现目录下的 _assets 目录,中英文策略同 label - en_US: icon_s_en.png -icon_large: # 供应商大图标,存储在对应供应商实现目录下的 _assets 目录,中英文策略同 label - en_US: icon_l_en.png -supported_model_types: # 支持的模型类型,Anthropic 仅支持 LLM -- llm -configurate_methods: # 支持的配置方式,Anthropic 仅支持预定义模型 -- predefined-model -provider_credential_schema: # 供应商凭据规则,由于 Anthropic 仅支持预定义模型,则需要定义统一供应商凭据规则 - credential_form_schemas: # 凭据表单项列表 - - variable: anthropic_api_key # 凭据参数变量名 - label: # 展示名称 - en_US: API Key - type: secret-input # 表单类型,此处 secret-input 代表加密信息输入框,编辑时只展示屏蔽后的信息。 - required: true # 是否必填 - placeholder: # PlaceHolder 信息 - zh_Hans: 在此输入你的 API Key - en_US: Enter your API Key - - variable: anthropic_api_url - label: - en_US: API URL - type: text-input # 表单类型,此处 text-input 代表文本输入框 - required: false - placeholder: - zh_Hans: 在此输入你的 API URL - en_US: Enter your API URL -``` - -如果接入的供应商提供自定义模型,比如`OpenAI`提供微调模型,那么我们就需要添加[`model_credential_schema`](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md),以`OpenAI`为例: - -```yaml -model_credential_schema: - model: # 微调模型名称 - label: - en_US: Model Name - zh_Hans: 模型名称 - placeholder: - en_US: Enter your model name - zh_Hans: 输入模型名称 - credential_form_schemas: - - variable: openai_api_key - label: - en_US: API Key - type: secret-input - required: true - placeholder: - zh_Hans: 在此输入你的 API Key - en_US: Enter your API Key - - variable: openai_organization - label: - zh_Hans: 组织 ID - 
en_US: Organization - type: text-input - required: false - placeholder: - zh_Hans: 在此输入你的组织 ID - en_US: Enter your Organization ID - - variable: openai_api_base - label: - zh_Hans: API Base - en_US: API Base - type: text-input - required: false - placeholder: - zh_Hans: 在此输入你的 API Base - en_US: Enter your API Base -``` - -也可以参考`model_providers`目录下其他供应商目录下的 [YAML 配置信息](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md)。 - -**实现供应商代码** - -我们需要在`model_providers`下创建一个同名的python文件,如`anthropic.py`,并实现一个`class`,继承`__base.provider.Provider`基类,如`AnthropicProvider`。 - -**自定义模型供应商** - -当供应商为 Xinference 等自定义模型供应商时,可跳过该步骤,仅创建一个空的`XinferenceProvider`类即可,并实现一个空的`validate_provider_credentials`方法,该方法并不会被实际使用,仅用作避免抽象类无法实例化。 - -```python -class XinferenceProvider(Provider): - def validate_provider_credentials(self, credentials: dict) -> None: - pass -``` - -**预定义模型供应商** - -供应商需要继承 `__base.model_provider.ModelProvider` 基类,实现 `validate_provider_credentials` 供应商统一凭据校验方法即可,可参考 [AnthropicProvider](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/model\_providers/anthropic/anthropic.py)。 - -```python -def validate_provider_credentials(self, credentials: dict) -> None: - """ - Validate provider credentials - You can choose any validate_credentials method of model type or implement validate method by yourself, - such as: get model list api - - if validate failed, raise exception - - :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. 
- """ -``` - -当然也可以先预留 `validate_provider_credentials` 实现,在模型凭据校验方法实现后直接复用。 - -**增加模型** - -[**增加预定义模型** ](https://docs.dify.ai/v/zh-hans/guides/model-configuration/predefined-model)**👈🏻** - -对于预定义模型,我们可以通过简单定义一个 yaml,并通过实现调用代码来接入。 - -[**增加自定义模型**](https://docs.dify.ai/v/zh-hans/guides/model-configuration/customizable-model) **👈🏻** - -对于自定义模型,我们只需要实现调用代码即可接入,但是它需要处理的参数可能会更加复杂。 - -*** - -#### 测试 - -为了保证接入供应商/模型的可用性,编写后的每个方法均需要在 `tests` 目录中编写对应的集成测试代码。 - -依旧以 `Anthropic` 为例。 - -在编写测试代码前,需要先在 `.env.example` 新增测试供应商所需要的凭据环境变量,如:`ANTHROPIC_API_KEY`。 - -在执行前需要将 `.env.example` 复制为 `.env` 再执行。 - -**编写测试代码** - -在 `tests` 目录下创建供应商同名的 `module`: `anthropic`,继续在此模块中创建 `test_provider.py` 以及对应模型类型的 test py 文件,如下所示: - -```shell -. -├── __init__.py -├── anthropic -│   ├── __init__.py -│   ├── test_llm.py # LLM 测试 -│   └── test_provider.py # 供应商测试 -``` - -针对上面实现的代码的各种情况进行测试代码编写,并测试通过后提交代码。 diff --git a/zh-hans/guides/model-configuration/new-provider.mdx b/zh-hans/guides/model-configuration/new-provider.mdx index 74e8c095..64633c29 100644 --- a/zh-hans/guides/model-configuration/new-provider.mdx +++ b/zh-hans/guides/model-configuration/new-provider.mdx @@ -159,11 +159,11 @@ def validate_provider_credentials(self, credentials: dict) -> None: **增加模型** -[**增加预定义模型** ](https://docs.dify.ai/v/zh-hans/guides/model-configuration/predefined-model)**👈🏻** +[**增加预定义模型** ](/zh-hans/guides/model-configuration/predefined-model)**👈🏻** 对于预定义模型,我们可以通过简单定义一个 yaml,并通过实现调用代码来接入。 -[**增加自定义模型**](https://docs.dify.ai/v/zh-hans/guides/model-configuration/customizable-model) **👈🏻** +[**增加自定义模型**](/zh-hans/guides/model-configuration/customizable-model) **👈🏻** 对于自定义模型,我们只需要实现调用代码即可接入,但是它需要处理的参数可能会更加复杂。 diff --git a/zh-hans/guides/model-configuration/predefined-model.md b/zh-hans/guides/model-configuration/predefined-model.md deleted file mode 100644 index 00400eb9..00000000 --- a/zh-hans/guides/model-configuration/predefined-model.md +++ /dev/null @@ -1,197 +0,0 @@ -# 预定义模型接入 - -供应商集成完成后,接下来为供应商下模型的接入。 - 
-我们首先需要确定接入模型的类型,并在对应供应商的目录下创建对应模型类型的 `module`。 - -当前支持模型类型如下: - -* `llm` 文本生成模型 -* `text_embedding` 文本 Embedding 模型 -* `rerank` Rerank 模型 -* `speech2text` 语音转文字 -* `tts` 文字转语音 -* `moderation` 审查 - -依旧以 `Anthropic` 为例,`Anthropic` 仅支持 LLM,因此在 `model_providers.anthropic` 创建一个 `llm` 为名称的 `module`。 - -对于预定义的模型,我们首先需要在 `llm` `module` 下创建以模型名为文件名称的 YAML 文件,如:`claude-2.1.yaml`。 - -#### 准备模型 YAML - -```yaml -model: claude-2.1 # 模型标识 -# 模型展示名称,可设置 en_US 英文、zh_Hans 中文两种语言,zh_Hans 不设置将默认使用 en_US。 -# 也可不设置 label,则使用 model 标识内容。 -label: - en_US: claude-2.1 -model_type: llm # 模型类型,claude-2.1 为 LLM -features: # 支持功能,agent-thought 为支持 Agent 推理,vision 为支持图片理解 -- agent-thought -model_properties: # 模型属性 - mode: chat # LLM 模式,complete 文本补全模型,chat 对话模型 - context_size: 200000 # 支持最大上下文大小 -parameter_rules: # 模型调用参数规则,仅 LLM 需要提供 -- name: temperature # 调用参数变量名 - # 默认预置了 5 种变量内容配置模板,temperature/top_p/max_tokens/presence_penalty/frequency_penalty - # 可在 use_template 中直接设置模板变量名,将会使用 entities.defaults.PARAMETER_RULE_TEMPLATE 中的默认配置 - # 若设置了额外的配置参数,将覆盖默认配置 - use_template: temperature -- name: top_p - use_template: top_p -- name: top_k - label: # 调用参数展示名称 - zh_Hans: 取样数量 - en_US: Top k - type: int # 参数类型,支持 float/int/string/boolean - help: # 帮助信息,描述参数作用 - zh_Hans: 仅从每个后续标记的前 K 个选项中采样。 - en_US: Only sample from the top K options for each subsequent token. 
- required: false # 是否必填,可不设置 -- name: max_tokens_to_sample - use_template: max_tokens - default: 4096 # 参数默认值 - min: 1 # 参数最小值,仅 float/int 可用 - max: 4096 # 参数最大值,仅 float/int 可用 -pricing: # 价格信息 - input: '8.00' # 输入单价,即 Prompt 单价 - output: '24.00' # 输出单价,即返回内容单价 - unit: '0.000001' # 价格单位,即上述价格为每 100K 的单价 - currency: USD # 价格货币 -``` - -建议将所有模型配置都准备完毕后再开始模型代码的实现。 - -同样,也可以参考 `model_providers` 目录下其他供应商对应模型类型目录下的 YAML 配置信息,完整的 YAML 规则见:Schema[^1]。 - -#### 实现模型调用代码 - -接下来需要在 `llm` `module` 下创建一个同名的 python 文件 `llm.py` 来编写代码实现。 - -在 `llm.py` 中创建一个 Anthropic LLM 类,我们取名为 `AnthropicLargeLanguageModel`(随意),继承 `__base.large_language_model.LargeLanguageModel` 基类,实现以下几个方法: - -* LLM 调用 - - 实现 LLM 调用的核心方法,可同时支持流式和同步返回。 - - ```python - def _invoke(self, model: str, credentials: dict, - prompt_messages: list[PromptMessage], model_parameters: dict, - tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, - stream: bool = True, user: Optional[str] = None) \ - -> Union[LLMResult, Generator]: - """ - Invoke large language model - - :param model: model name - :param credentials: model credentials - :param prompt_messages: prompt messages - :param model_parameters: model parameters - :param tools: tools for tool calling - :param stop: stop words - :param stream: is stream response - :param user: unique user id - :return: full response or stream response chunk generator result - """ - ``` - - 在实现时,需要注意使用两个函数来返回数据,分别用于处理同步返回和流式返回,因为Python会将函数中包含 `yield` 关键字的函数识别为生成器函数,返回的数据类型固定为 `Generator`,因此同步和流式返回需要分别实现,就像下面这样(注意下面例子使用了简化参数,实际实现时需要按照上面的参数列表进行实现): - - ```python - def _invoke(self, stream: bool, **kwargs) \ - -> Union[LLMResult, Generator]: - if stream: - return self._handle_stream_response(**kwargs) - return self._handle_sync_response(**kwargs) - - def _handle_stream_response(self, **kwargs) -> Generator: - for chunk in response: - yield chunk - def _handle_sync_response(self, **kwargs) -> LLMResult: - return LLMResult(**response) - ``` -* 预计算输入 tokens - - 
若模型未提供预计算 tokens 接口,可直接返回 0。 - - ```python - def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], - tools: Optional[list[PromptMessageTool]] = None) -> int: - """ - Get number of tokens for given prompt messages - - :param model: model name - :param credentials: model credentials - :param prompt_messages: prompt messages - :param tools: tools for tool calling - :return: - """ - ``` -* 模型凭据校验 - - 与供应商凭据校验类似,这里针对单个模型进行校验。 - - ```python - def validate_credentials(self, model: str, credentials: dict) -> None: - """ - Validate model credentials - - :param model: model name - :param credentials: model credentials - :return: - """ - ``` -* 调用异常错误映射表 - - 当模型调用异常时需要映射到 Runtime 指定的 `InvokeError` 类型,方便 Dify 针对不同错误做不同后续处理。 - - Runtime Errors: - - * `InvokeConnectionError` 调用连接错误 - * `InvokeServerUnavailableError` 调用服务方不可用 - * `InvokeRateLimitError` 调用达到限额 - * `InvokeAuthorizationError` 调用鉴权失败 - * `InvokeBadRequestError` 调用传参有误 - - ```python - @property - def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: - """ - Map model invoke error to unified error - The key is the error type thrown to the caller - The value is the error type thrown by the model, - which needs to be converted into a unified error type for the caller. 
- - :return: Invoke error mapping - """ - ``` - -接口方法说明见:[Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md),具体实现可参考:[llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py)。 - -[^1]: #### Provider - - * `provider` (string) 供应商标识,如:`openai` - * `label` (object) 供应商展示名称,i18n,可设置 `en_US` 英文、`zh_Hans` 中文两种语言 - * `zh_Hans` (string) \[optional] 中文标签名,`zh_Hans` 不设置将默认使用 `en_US`。 - * `en_US` (string) 英文标签名 - * `description` (object) \[optional] 供应商描述,i18n - * `zh_Hans` (string) \[optional] 中文描述 - * `en_US` (string) 英文描述 - * `icon_small` (string) \[optional] 供应商小 ICON,存储在对应供应商实现目录下的 `_assets` 目录,中英文策略同 `label` - * `zh_Hans` (string) \[optional] 中文 ICON - * `en_US` (string) 英文 ICON - * `icon_large` (string) \[optional] 供应商大 ICON,存储在对应供应商实现目录下的 \_assets 目录,中英文策略同 label - * `zh_Hans` (string) \[optional] 中文 ICON - * `en_US` (string) 英文 ICON - * `background` (string) \[optional] 背景颜色色值,例:#FFFFFF,为空则展示前端默认色值。 - * `help` (object) \[optional] 帮助信息 - * `title` (object) 帮助标题,i18n - * `zh_Hans` (string) \[optional] 中文标题 - * `en_US` (string) 英文标题 - * `url` (object) 帮助链接,i18n - * `zh_Hans` (string) \[optional] 中文链接 - * `en_US` (string) 英文链接 - * `supported_model_types` (array\[ModelType]) 支持的模型类型 - * `configurate_methods` (array\[ConfigurateMethod]) 配置方式 - * `provider_credential_schema` (ProviderCredentialSchema) 供应商凭据规格 - * `model_credential_schema` (ModelCredentialSchema) 模型凭据规格 diff --git a/zh-hans/guides/model-configuration/readme.mdx b/zh-hans/guides/model-configuration/readme.mdx index 74946f63..41c65f66 100644 --- a/zh-hans/guides/model-configuration/readme.mdx +++ b/zh-hans/guides/model-configuration/readme.mdx @@ -19,7 +19,7 @@ Dify 目前已支持主流的模型供应商,例如 OpenAI 的 GPT 系列、An 2. **Embedding 模型**。在知识库中,将分段过的文档做 Embedding 用的是该类型的模型。在使用了知识库的应用中,将用户的提问做 Embedding 处理也是用的该类型的模型。 > 已支持的 Embedding 模型供应商:OpenAI、ZHIPU(ChatGLM)、Jina AI([Jina Embeddings](https://jina.ai/embeddings/)) -3. 
[**Rerank 模型**](https://docs.dify.ai/v/zh-hans/advanced/retrieval-augment/rerank)。**Rerank 模型用于增强检索能力,改善 LLM 的搜索结果。**
+3. [**Rerank 模型**](/zh-hans/advanced/retrieval-augment/rerank)。**Rerank 模型用于增强检索能力,改善 LLM 的搜索结果。**

 > 已支持的 Rerank 模型供应商:Cohere、Jina AI([Jina Reranker](https://jina.ai/reranker))

 4. **语音转文字模型**。将对话型应用中,将语音转文字用的是该类型的模型。
diff --git a/zh-hans/guides/workflow/additional-feature.mdx b/zh-hans/guides/workflow/additional-feature.mdx
index a0f3e224..67cae2e1 100644
--- a/zh-hans/guides/workflow/additional-feature.mdx
+++ b/zh-hans/guides/workflow/additional-feature.mdx
@@ -10,7 +10,7 @@ Workflow 和 Chatflow 应用均支持开启附加功能以增强使用者的交
@@ -26,7 +26,7 @@ Workflow 类型应用仅支持 **"图片上传"** 功能。开启后,Workflow
diff --git a/zh-hans/guides/workflow/nodes/loop.mdx b/zh-hans/guides/workflow/nodes/loop.mdx
new file mode 100644
index 00000000..1b131773
--- /dev/null
+++ b/zh-hans/guides/workflow/nodes/loop.mdx
@@ -0,0 +1,85 @@
+---
+title: Loop
+---
+
+## What is Loop Node?
+
+A **Loop** node executes repetitive tasks that depend on previous iteration results until exit conditions are met or the maximum loop count is reached.
+
+## Loop vs. Iteration
+
+| Type | Dependencies | Use Cases |
+| --- | --- | --- |
+| Loop | Each iteration depends on previous results | Recursive operations, optimization problems |
+| Iteration | Iterations execute independently | Batch processing, parallel data handling |
+
+## Configuration
+
+| Parameter | Description | Example |
+| --- | --- | --- |
+| Loop Termination Condition | Expression that determines when to exit the loop | x < 50, error_rate < 0.01 |
+| Maximum Loop Count | Upper limit on iterations to prevent infinite loops | 10, 100, 1000 |
+
+![Configuration](https://assets-docs.dify.ai/2025/03/13853bfaaa068cdbdeba1b1f75d482f2.png)
+
+## Usage Example
+
+**Goal: Generate random numbers (1-100) until a value below 50 appears.**
+
+**Steps**:
+
+1. Use `node` to generate a random number between 1-100.
+
+2. Use `if` to evaluate the number:
+
+   - If < 50: Output `done` and terminate loop.
+
+   - If ≥ 50: Continue loop and generate another random number.
+
+3. Set the exit criterion to random_number < 50.
+
+4. Loop ends when a number below 50 appears.
+
+![Steps](https://assets-docs.dify.ai/2025/03/b1c277001fc3cb1fbb85fe7c22a6d0fc.png)
+
+## Planned Enhancements
+
+**Future releases will include:**
+
+  - Loop variables: Store and reference values across iterations for improved state management and conditional logic.
+
+  - `break` node: Terminate loops from within the execution path, enabling more sophisticated control flow patterns.
diff --git a/zh-hans/introduction.mdx b/zh-hans/introduction.mdx
index ceb9e2ae..01e18d1c 100644
--- a/zh-hans/introduction.mdx
+++ b/zh-hans/introduction.mdx
@@ -2,7 +2,7 @@ title: 产品简介
 ---
-**Dify** 是一款开源的大语言模型(LLM) 应用开发平台。它融合了后端即服务(Backend as Service)和 [LLMOps](learn-more/extended-reading/what-is-llmops.md) 的理念,使开发者可以快速搭建生产级的生成式 AI 应用。即使你是非技术人员,也能参与到 AI 应用的定义和数据运营过程中。
+**Dify** 是一款开源的大语言模型(LLM) 应用开发平台。它融合了后端即服务(Backend as Service)和 [LLMOps](learn-more/extended-reading/what-is-llmops) 的理念,使开发者可以快速搭建生产级的生成式 AI 应用。即使你是非技术人员,也能参与到 AI 应用的定义和数据运营过程中。
 由于 Dify 内置了构建 LLM 应用所需的关键技术栈,包括对数百个模型的支持、直观的 Prompt 编排界面、高质量的 RAG 引擎、稳健的 Agent 框架、灵活的流程编排,并同时提供了一套易用的界面和 API。这为开发者节省了许多重复造轮子的时间,使其可以专注在创新和业务需求上。
@@ -30,7 +30,7 @@ Dify 一词源自 Define + Modify,意指定义并且持续的改进你的 AI
 ### 下一步行动
-* 阅读[**快速开始**](guides/application-orchestrate/creating-an-application.md),速览 Dify 的应用构建流程
+* 阅读[**快速开始**](guides/application-orchestrate/creating-an-application/readme),速览 Dify 的应用构建流程
 * 了解如何[**自部署 Dify 到服务器**](getting-started/install-self-hosted/)上,并[**接入开源模型**](guides/model-configuration/)
-* 了解 Dify 
的[**特性规格**](getting-started/readme/features-and-specifications.md)和 **Roadmap** +* 了解 Dify 的[**特性规格**](getting-started/readme/features-and-specifications)和 **Roadmap** * 在 [**GitHub**](https://github.com/langgenius/dify) 上为我们点亮一颗星,并阅读我们的**贡献指南** diff --git a/zh-hans/learn-more/extended-reading/retrieval-augment/rerank.mdx b/zh-hans/learn-more/extended-reading/retrieval-augment/rerank.mdx index c656c339..2d185468 100644 --- a/zh-hans/learn-more/extended-reading/retrieval-augment/rerank.mdx +++ b/zh-hans/learn-more/extended-reading/retrieval-augment/rerank.mdx @@ -51,6 +51,6 @@ Dify 目前已支持 Cohere Rerank 模型,进入“模型供应商-> Cohere” 进入“提示词编排->上下文->设置”页面中设置为多路召回模式时需开启 Rerank 模型。 -查看更多关于多路召回模式的说明,[《多路召回》](https://docs.dify.ai/v/zh-hans/guides/knowledge-base/integrate-knowledge-within-application#duo-lu-zhao-hui-tui-jian)。 +查看更多关于多路召回模式的说明,[《多路召回》](/zh-hans/guides/knowledge-base/integrate-knowledge-within-application#duo-lu-zhao-hui-tui-jian)。 ![知识库多路召回模式中设置 Rerank 模型](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/learn-more/extended-reading/retrieval-augment/24e43084b10f144c718c5b8545c1e4b4.png) diff --git a/zh-hans/learn-more/extended-reading/retrieval-augment/retrieval.mdx b/zh-hans/learn-more/extended-reading/retrieval-augment/retrieval.mdx index 9b133630..5af9b092 100644 --- a/zh-hans/learn-more/extended-reading/retrieval-augment/retrieval.mdx +++ b/zh-hans/learn-more/extended-reading/retrieval-augment/retrieval.mdx @@ -11,7 +11,7 @@ title: 召回模式 根据用户意图同时匹配所有知识库,从多路知识库查询相关文本片段,经过重排序步骤,从多路查询结果中选择匹配用户问题的最佳结果,需配置 Rerank 模型 API。在多路召回模式下,检索器会在所有与应用关联的知识库中去检索与用户问题相关的文本内容,并将多路召回的相关文档结果合并,并通过 Rerank 模型对检索召回的文档进行语义重排序。 -在多路召回模式下,建议配置 Rerank 模型。你可以阅读 [重排序](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/rerank) 了解更多。 +在多路召回模式下,建议配置 Rerank 模型。你可以阅读 [重排序](/zh-hans/learn-more/extended-reading/retrieval-augment/rerank) 了解更多。 以下是多路召回模式的技术流程图: diff --git a/zh-hans/learn-more/faq/README.mdx b/zh-hans/learn-more/faq/README.mdx index d5fb6440..2ab21125 
100644 --- a/zh-hans/learn-more/faq/README.mdx +++ b/zh-hans/learn-more/faq/README.mdx @@ -3,6 +3,6 @@ title: 常见问题 --- -[本地部署相关常见问题](https://docs.dify.ai/v/zh-hans/getting-started/faq/install-faq) +[本地部署相关常见问题](/zh-hans/getting-started/faq/install-faq) -[LLM 配置与使用相关常见问题](https://docs.dify.ai/v/zh-hans/getting-started/faq/llms-use-faq) \ No newline at end of file +[LLM 配置与使用相关常见问题](/zh-hans/getting-started/faq/llms-use-faq) \ No newline at end of file diff --git a/zh-hans/learn-more/faq/install-faq.mdx b/zh-hans/learn-more/faq/install-faq.mdx index 6c9dad25..ac2b6875 100644 --- a/zh-hans/learn-more/faq/install-faq.mdx +++ b/zh-hans/learn-more/faq/install-faq.mdx @@ -101,7 +101,7 @@ FileNotFoundError: File not found ### 11. 本地部署版如何解决知识库文档上传的大小限制和数量限制。 -可参考官网[环境变量说明文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments)去配置。 +可参考官网[环境变量说明文档](/zh-hans/getting-started/install-self-hosted/environments)去配置。 ### 12. 本地部署版如何通过邮箱邀请成员? @@ -113,7 +113,7 @@ FileNotFoundError: File not found Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer. ``` -可参考官网[环境变量说明文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments)去配置。以及相关 [Issue](https://github.com/langgenius/dify/issues/1261)。 +可参考官网[环境变量说明文档](/zh-hans/getting-started/install-self-hosted/environments)去配置。以及相关 [Issue](https://github.com/langgenius/dify/issues/1261)。 ### 14. 本地部署 80 端口被占用应该如何解决? diff --git a/zh-hans/learn-more/faq/llms-use-faq.mdx b/zh-hans/learn-more/faq/llms-use-faq.mdx index f1ad0c24..fad18b9a 100644 --- a/zh-hans/learn-more/faq/llms-use-faq.mdx +++ b/zh-hans/learn-more/faq/llms-use-faq.mdx @@ -102,7 +102,7 @@ Query or prefix prompt is too long, you can reduce the preix prompt, or shrink t ### 15. 知识库文档上传的大小限制有哪些? 
-目前知识库文档上传单个文档最大是 15MB,总文档数量限制 100 个。如你本地部署版本需要调整修改该限制,请参考[文档](https://docs.dify.ai/v/zh-hans/getting-started/faq/install-faq#11.-ben-di-bu-shu-ban-ru-he-jie-jue-shu-ju-ji-wen-dang-shang-chuan-de-da-xiao-xian-zhi-he-shu-liang)。 +目前知识库文档上传单个文档最大是 15MB,总文档数量限制 100 个。如你本地部署版本需要调整修改该限制,请参考[文档](/zh-hans/getting-started/faq/install-faq#11.-ben-di-bu-shu-ban-ru-he-jie-jue-shu-ju-ji-wen-dang-shang-chuan-de-da-xiao-xian-zhi-he-shu-liang)。 ### 16. 为什么选择了 Claude 模型,还是会消耗 OpenAI 的费用? @@ -110,7 +110,7 @@ Query or prefix prompt is too long, you can reduce the preix prompt, or shrink t ### 17. 有什么方式能控制更多地使用上下文数据而不是模型自身生成能力吗? -是否使用知识库,会和知识库的描述有关系,尽可能把知识库描述写清楚,具体可参考[此文档编写技巧](https://docs.dify.ai/v/zh-hans/advanced/datasets)。 +是否使用知识库,会和知识库的描述有关系,尽可能把知识库描述写清楚,具体可参考[此文档编写技巧](/zh-hans/advanced/datasets)。 ### 18. 上传知识库文档是 Excel,该如何更好地分段? diff --git a/zh-hans/learn-more/use-cases/dify-on-dingtalk.mdx b/zh-hans/learn-more/use-cases/dify-on-dingtalk.mdx index 9d30c379..e8f2c87a 100644 --- a/zh-hans/learn-more/use-cases/dify-on-dingtalk.mdx +++ b/zh-hans/learn-more/use-cases/dify-on-dingtalk.mdx @@ -28,7 +28,7 @@ IM 是天然的智能聊天机器人应用场景,校企用户有不少是使 ### 2.1. 创建 Dify 应用 -创建 Dify 应用在本文就不赘述了,可以参考非常详实的 [Dify 官方文档](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/creating-an-application)。这里你需要知道的是,本文介绍的方法支持接入的 Dify 应用包含了 Dify 目前所有类型。 +创建 Dify 应用在本文就不赘述了,可以参考非常详实的 [Dify 官方文档](/zh-hans/guides/application-orchestrate/creating-an-application)。这里你需要知道的是,本文介绍的方法支持接入的 Dify 应用包含了 Dify 目前所有类型。 下图是一个简易方法让你快速识别自己应用的类型,**后续配置时需要明确写明接入应用的类型**。 diff --git a/zh-hans/learn-more/use-cases/dify-on-teams.mdx b/zh-hans/learn-more/use-cases/dify-on-teams.mdx index e05d57b7..18ef5473 100644 --- a/zh-hans/learn-more/use-cases/dify-on-teams.mdx +++ b/zh-hans/learn-more/use-cases/dify-on-teams.mdx @@ -20,7 +20,7 @@ title: 使用 Dify 和 Azure Bot Framework 构建 Microsoft Teams 机器人 ## 3. 
创建 Dify 基础编排聊天助手应用 -首先,登录 [Dify 平台](https://cloud.dify.ai/signin),使用 Github 登录或者使用 Google 登录。此外,你也可以参考 Dify 官方教程 [Docker Compose 部署](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署。 +首先,登录 [Dify 平台](https://cloud.dify.ai/signin),使用 Github 登录或者使用 Google 登录。此外,你也可以参考 Dify 官方教程 [Docker Compose 部署](/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署。 ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/learn-more/use-cases/08ece9f6dd15b6ba3c44eb187ea73bdd.jpeg) diff --git a/zh-hans/learn-more/use-cases/dify-on-wechat.mdx b/zh-hans/learn-more/use-cases/dify-on-wechat.mdx index d29308e5..ad5cce8a 100644 --- a/zh-hans/learn-more/use-cases/dify-on-wechat.mdx +++ b/zh-hans/learn-more/use-cases/dify-on-wechat.mdx @@ -26,7 +26,7 @@ Dify是一个优秀的LLMOps(大型语言模型运维)平台,Dify的详细 **(2)登录Dify官方应用平台** -首先,登录[Dify官方应用平台](https://cloud.dify.ai/signin),你可以选择使用Github登录或者使用Google登录。此外,你也可以参考Dify官方教程[Docker Compose 部署 | 中文 | Dify](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署,Dify是开源项目,支持私有部署。 +首先,登录[Dify官方应用平台](https://cloud.dify.ai/signin),你可以选择使用Github登录或者使用Google登录。此外,你也可以参考Dify官方教程[Docker Compose 部署 | 中文 | Dify](/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署,Dify是开源项目,支持私有部署。 ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/learn-more/use-cases/4d59263223217b94f8c7b4fb8a2951b4.jpeg) diff --git a/zh-hans/learn-more/use-cases/dify-on-whatsapp.mdx b/zh-hans/learn-more/use-cases/dify-on-whatsapp.mdx index fab0cc7b..f9404737 100644 --- a/zh-hans/learn-more/use-cases/dify-on-whatsapp.mdx +++ b/zh-hans/learn-more/use-cases/dify-on-whatsapp.mdx @@ -23,7 +23,7 @@ title: 使用 Dify 和 Twilio 构建 WhatsApp 机器人 ## 3. 
创建Dify基础编排聊天助手应用 (節錄自[手把手教你把 Dify 接入微信生态](dify-on-wechat.md)) -首先,登录[Dify官方应用平台](https://cloud.dify.ai/signin),你可以选择使用Github登录或者使用Google登录。此外,你也可以参考Dify官方教程[Docker Compose 部署 | 中文 | Dify](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署,Dify是开源项目,支持私有部署。 +首先,登录[Dify官方应用平台](https://cloud.dify.ai/signin),你可以选择使用Github登录或者使用Google登录。此外,你也可以参考Dify官方教程[Docker Compose 部署 | 中文 | Dify](/zh-hans/getting-started/install-self-hosted/docker-compose) 私有部署,Dify是开源项目,支持私有部署。 ![](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/learn-more/use-cases/08ece9f6dd15b6ba3c44eb187ea73bdd.jpeg) diff --git a/zh-hans/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx b/zh-hans/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx index 121e07fe..f33b2e7c 100644 --- a/zh-hans/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx +++ b/zh-hans/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.mdx @@ -11,7 +11,7 @@ Wix 是一个非常流行的网站创建平台,它允许用户通过拖拽的 ## 1. 获取 Dify 应用的 iFrame 代码片段 -假设你已创建了一个 [Dify AI 应用](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/creating-an-application),你可以通过以下步骤获取 Dify 应用的 iFrame 代码片段: +假设你已创建了一个 [Dify AI 应用](/zh-hans/guides/application-orchestrate/creating-an-application),你可以通过以下步骤获取 Dify 应用的 iFrame 代码片段: * 登录你的 Dify 账户 * 选择你想要嵌入的 Dify 应用 diff --git a/zh-hans/learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience.mdx b/zh-hans/learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience.mdx index c287a684..ea7b8c16 100644 --- a/zh-hans/learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience.mdx +++ b/zh-hans/learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience.mdx @@ -97,4 +97,4 @@ title: 如何让 LLM 应用提供循序渐进的聊天体验? 
![](https://assets-docs.dify.ai/2025/01/3b99ffe6ee3425789fc08da0f267afa0.png) -> 如果还想要了解更多关于工作流的编排技巧,请参考[《工作流》](https://docs.dify.ai/v/zh-hans/guides/workflow)。 +> 如果还想要了解更多关于工作流的编排技巧,请参考[《工作流》](/zh-hans/guides/workflow)。 diff --git a/zh-hans/learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.mdx b/zh-hans/learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.mdx index b1a4fb91..e9cc7171 100644 --- a/zh-hans/learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.mdx +++ b/zh-hans/learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.mdx @@ -208,7 +208,7 @@ export PATH=$PATH:/usr/local/cuda-12.2/lib64 ## 部署推理服务 Xinference -根据 Dify 的[部署文档](https://docs.dify.ai/v/zh-hans/advanced/model-configuration/xinference),Xinference 支持的模型种类很多。本次以 Baichuan-13B-Chat 为例。 +根据 Dify 的[部署文档](/zh-hans/advanced/model-configuration/xinference),Xinference 支持的模型种类很多。本次以 Baichuan-13B-Chat 为例。 > [Xorbits inference](https://github.com/xorbitsai/inference) 是一个强大且通用的分布式推理框架,旨在为大型语言模型、语音识别模型和多模态模型提供服务,甚至可以在笔记本电脑上使用。它支持多种与 GGML 兼容的模型,如 ChatGLM,Baichuan,Whisper,Vicuna,Orca 等。 Dify 支持以本地部署的方式接入 Xinference 部署的大型语言模型推理和 Embedding 能力。 @@ -276,7 +276,7 @@ UID Type Name Format Size (i ## 部署 Dify.AI -主要流程参考官网[部署文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose)。 +主要流程参考官网[部署文档](/zh-hans/getting-started/install-self-hosted/docker-compose)。 #### Clone Dify diff --git a/zh-hans/url-report/link-check-report-error.md b/zh-hans/url-report/link-check-report-error.md new file mode 100644 index 00000000..a54a7fc3 --- /dev/null +++ b/zh-hans/url-report/link-check-report-error.md @@ -0,0 +1,173 @@ +# GitBook链接检查报告(仅错误链接) + +本报告仅显示文档中的无效链接。每行的格式为: +* [文档标题](文档链接) | [无效链接](链接路径) ❌ + +## 来自 [成为贡献者](community/contribution.md) + +* 
[成为贡献者](community/contribution.md) ✅ | [此指南](https://github.com/langgenius/dify/blob/main/api/core/tools/README_CN.md) ❌ + +## 来自 [接入 AWS Bedrock 上的模型(DeepSeek)](development/models-integration/aws-bedrock-deepseek.md) + +* [接入 AWS Bedrock 上的模型(DeepSeek)](development/models-integration/aws-bedrock-deepseek.md) ✅ | [Dify.AI 账号](https://cloud.dify.ai/) ❌ + +## 来自 [接入 Hugging Face 上的开源模型](development/models-integration/hugging-face.md) + +* [接入 Hugging Face 上的开源模型](development/models-integration/hugging-face.md) ✅ | [注册地址](https://huggingface.co/join) ❌ + +## 来自 [接入 LocalAI 部署的本地模型](development/models-integration/localai.md) + +* [接入 LocalAI 部署的本地模型](development/models-integration/localai.md) ✅ | [LocalAI Data query example](https://github.com/go-skynet/LocalAI/blob/master/examples/langchain-chroma/README.md) ❌ + +## 来自 [接入 Xinference 部署的本地模型](development/models-integration/xinference.md) + +* [接入 Xinference 部署的本地模型](development/models-integration/xinference.md) ✅ | [本地部署](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E6%9C%AC%E5%9C%B0%E9%83%A8%E7%BD%B2) ❌ | [分布式部署](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E5%88%86%E5%B8%83%E5%BC%8F%E9%83%A8%E7%BD%B2) ❌ | [Xinference embed 模型](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E5%86%85%E7%BD%AE%E6%A8%A1%E5%9E%8B) ❌ | [Xorbits Inference](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md) ❌ + +## 来自 [Dify Premium](getting-started/dify-premium.md) + +* [Dify Premium](getting-started/dify-premium.md) ✅ | [AWS AMI](https://docs.aws.amazon.com/zh\_cn/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html) ❌ + +## 来自 [环境变量说明](getting-started/install-self-hosted/environments.md) + +* [环境变量说明](getting-started/install-self-hosted/environments.md) ✅ | [文档]( https://www.volcengine.com/docs/6349/107356) ❌ | [工具](https://github.com/langgenius/dify/blob/main/api/core/tools/provider/_position.yaml) ❌ + +## 来自 
[模型供应商列表](getting-started/readme/model-providers.md) + +* [模型供应商列表](getting-started/readme/model-providers.md) ✅ | [contribution.md](../../community/contribution.md "mention") ❌ + +## 来自 [敏感内容审查](guides/application-orchestrate/app-toolkits/moderation-tool.md) + +* [敏感内容审查](guides/application-orchestrate/app-toolkits/moderation-tool.md) ✅ | [moderation.md](../../extension/api-based-extension/moderation.md "mention") ❌ + +## 来自 [嵌入网站](guides/application-publishing/embedding-in-websites.md) + +* [嵌入网站](guides/application-publishing/embedding-in-websites.md) ✅ | [https://dev.udify.app](https://dev.udify.app) ❌ | [https://udify.app](https://udify.app) ❌ + +## 来自 [发布为公开 Web 站点](guides/application-publishing/launch-your-webapp-quickly/README.md) + +* [发布为公开 Web 站点](guides/application-publishing/launch-your-webapp-quickly/README.md) ✅ | [https://udify.app/](https://udify.app/) ❌ + +## 来自 [API 扩展](guides/extension/api-based-extension/README.md) + +* [API 扩展](guides/extension/api-based-extension/README.md) ✅ | [外部数据工具](../../knowledge-base/external-data-tool.md "mention") ❌ | [cloudflare-workers.md](cloudflare-workers.md "mention") ❌ + +## 来自 [代码扩展](guides/extension/code-based-extension/README.md) + +* [代码扩展](guides/extension/code-based-extension/README.md) ✅ | [外部数据工具](external-data-tool.md "mention") ❌ | [敏感内容审核](moderation.md "mention") ❌ + +## 来自 [外部数据工具](guides/extension/code-based-extension/external-data-tool.md) + +* [外部数据工具](guides/extension/code-based-extension/external-data-tool.md) ✅ | [api-based-extension](../api-based-extension/ "mention") ❌ + +## 来自 [连接外部知识库](guides/knowledge-base/connect-external-knowledge-base.md) + +* [连接外部知识库](guides/knowledge-base/connect-external-knowledge-base.md) ✅ | [how-to-connect-aws-bedrock.md](../../learn-more/use-cases/how-to-connect-aws-bedrock.md "mention") ❌ + +## 来自 [接入大模型](guides/model-configuration/README.md) + +* [接入大模型](guides/model-configuration/README.md) ✅ | [Azure OpenAI 
Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service/) ❌ + +## 来自 [自定义模型接入](guides/model-configuration/customizable-model.md) + +* [自定义模型接入](guides/model-configuration/customizable-model.md) ✅ | [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md) ❌ | [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ + +## 来自 [接口方法](guides/model-configuration/interfaces.md) + +* [接口方法](guides/model-configuration/interfaces.md) ✅ | [openai](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ + +## 来自 [增加新供应商](guides/model-configuration/new-provider.md) + +* [增加新供应商](guides/model-configuration/new-provider.md) ✅ | [Provider Schema](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [AI Model Entity](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md#aimodelentity) ❌ | [`model_credential_schema`](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [YAML 配置信息](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [AnthropicProvider](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/model\_providers/anthropic/anthropic.py) ❌ + +## 来自 [预定义模型接入](guides/model-configuration/predefined-model.md) + +* [预定义模型接入](guides/model-configuration/predefined-model.md) ✅ | [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md) ❌ | [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ + +## 来自 [Perplexity Search](guides/tools/tool-configuration/perplexity.md) + +* [Perplexity Search](guides/tools/tool-configuration/perplexity.md) ✅ | [Perplexity](https://www.perplexity.ai/settings/api) ❌ + +## 来自 
[Youtube](guides/tools/tool-configuration/youtube.md) + +* [Youtube](guides/tools/tool-configuration/youtube.md) ✅ | [Dify 工具页](https://cloud.dify.ai/tools) ❌ + +## 来自 [文档提取器](guides/workflow/node/doc-extractor.md) + +* [文档提取器](guides/workflow/node/doc-extractor.md) ✅ | [list-operator.md](list-operator.md "mention") ❌ + +## 来自 [如何使用 JSON Schema 让 LLM 输出遵循结构化格式的内容?](learn-more/extended-reading/how-to-use-json-schema-in-dify.md) + +* [如何使用 JSON Schema 让 LLM 输出遵循结构化格式的内容?](learn-more/extended-reading/how-to-use-json-schema-in-dify.md) ✅ | [Introduction to Structured Outputs](https://cookbook.openai.com/examples/structured\_outputs\_intro) ❌ + +## 来自 [LLM 配置与使用](learn-more/faq/llms-use-faq.md) + +* [LLM 配置与使用](learn-more/faq/llms-use-faq.md) ✅ | [余弦相似度](https://en.wikipedia.org/wiki/Cosine\_similarity) ❌ + +## 来自 [如何在 Dify 内体验大模型“竞技场”?以 DeepSeek R1 VS o1 为例](learn-more/use-cases/dify-model-arena.md) + +* [如何在 Dify 内体验大模型“竞技场”?以 DeepSeek R1 VS o1 为例](learn-more/use-cases/dify-model-arena.md) ✅ | [“多模型调试”](/zh_CN/guides/application-orchestrate/multiple-llms-debugging.md) ❌ | [多模型调试](/zh_CN/guides/application-orchestrate/multiple-llms-debugging.md) ❌ + +## 来自 [使用 Dify 和 Azure Bot Framework 构建 Microsoft Teams 机器人](learn-more/use-cases/dify-on-teams.md) + +* [使用 Dify 和 Azure Bot Framework 构建 Microsoft Teams 机器人](learn-more/use-cases/dify-on-teams.md) ✅ | [Azure 账户](https://azure.microsoft.com/en-us/free) ❌ + +## 来自 [手把手教你把 Dify 接入微信生态](learn-more/use-cases/dify-on-wechat.md) + +* [手把手教你把 Dify 接入微信生态](learn-more/use-cases/dify-on-wechat.md) ✅ | [官方下载链接](https://dldir1.qq.com/wework/work\_weixin/WeCom\_4.0.8.6027.exe) ❌ + +## 来自 [DeepSeek 与 Dify 集成指南:打造具备多轮思考的 AI 应用](learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md) + +* [DeepSeek 与 Dify 集成指南:打造具备多轮思考的 AI 应用](learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md) ✅ | [**本地私有化部署 DeepSeek + Dify**](broken-reference) ❌ | [本地部署指南](broken-reference) ❌ | [本地部署 DeepSeek + Dify,构建你的专属私有 AI 助手](broken-reference) ❌ 
+ +## 来自 [使用全套开源工具构建 LLM 应用实战:在 Dify 调用 Baichuan 开源模型能力](learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.md) + +* [使用全套开源工具构建 LLM 应用实战:在 Dify 调用 Baichuan 开源模型能力](learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.md) ✅ | [博客](https://www.cnblogs.com/tuilk/p/16287472.html) ❌ | [http://localhost:9997](http://localhost:9997) ❌ + +## 来自 [开发 Slack Bot 插件](plugins/best-practice/develop-a-slack-bot-plugin.md) + +* [开发 Slack Bot 插件](plugins/best-practice/develop-a-slack-bot-plugin.md) ✅ | [初始化开发工具](../initialize-development-tools.md) ❌ | [快速开始:开发 Extension 插件](../extension-plugin.md) ❌ | [反向调用:App](../../../schema-definition/reverse-invocation-of-the-dify-service/app.md) ❌ | [开发 Extension 插件](../extension-plugin.md) ❌ | [开发 Model 插件](../model-plugin/) ❌ | [Bundle 类型插件:将多个插件打包](../bundle.md) ❌ | [Manifest](../../../schema-definition/manifest.md) ❌ | [Endpoint](../../../schema-definition/endpoint.md) ❌ | [反向调用 Dify 能力](../../../schema-definition/reverse-invocation-of-the-dify-service/) ❌ | [工具](../../../schema-definition/tool.md) ❌ | [模型](../../../schema-definition/model/) ❌ + +## 来自 [功能简介](plugins/introduction.md) + +* [功能简介](plugins/introduction.md) ✅ | [本地文件](publish-plugins/package-and-publish-plugin-file.md) ❌ + +## 来自 [发布至个人 GitHub 仓库](plugins/publish-plugins/publish-plugin-on-personal-github-repo.md) + +* [发布至个人 GitHub 仓库](plugins/publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ | [打包插件](broken-reference) ❌ + +## 来自 [插件调试](plugins/quick-start/debug-plugin.md) + +* [插件调试](plugins/quick-start/debug-plugin.md) ✅ | [“插件管理”](https://cloud.dify.ai/plugins) ❌ + +## 来自 [插件开发](plugins/quick-start/develop-plugins/README.md) + +* [插件开发](plugins/quick-start/develop-plugins/README.md) ✅ | [app.md](../../schema-definition/reverse-invocation-of-the-dify-service/app.md "mention") ❌ | 
[model.md](../../schema-definition/reverse-invocation-of-the-dify-service/model.md "mention") ❌ | [node.md](../../schema-definition/reverse-invocation-of-the-dify-service/node.md "mention") ❌ | [tool.md](../../schema-definition/reverse-invocation-of-the-dify-service/tool.md "mention") ❌ + +## 来自 [Agent 策略插件](plugins/quick-start/develop-plugins/agent-strategy-plugin.md) + +* [Agent 策略插件](plugins/quick-start/develop-plugins/agent-strategy-plugin.md) ✅ | [“插件管理”](https://console-plugin.dify.dev/plugins) ❌ + +## 来自 [Model 插件](plugins/quick-start/develop-plugins/model-plugin/README.md) + +* [Model 插件](plugins/quick-start/develop-plugins/model-plugin/README.md) ✅ | [调试插件](../../debug-plugins.md) ❌ + +## 来自 [接入自定义模型](plugins/quick-start/develop-plugins/model-plugin/customizable-model.md) + +* [接入自定义模型](plugins/quick-start/develop-plugins/model-plugin/customizable-model.md) ✅ | [debug-plugins.md](../../debug-plugins.md) ❌ + +## 来自 [Tool 插件](plugins/quick-start/develop-plugins/tool-plugin.md) + +* [Tool 插件](plugins/quick-start/develop-plugins/tool-plugin.md) ✅ | [“插件管理”](https://cloud.dify.ai/plugins) ❌ + +## 来自 [Agent](plugins/schema-definition/agent.md) + +* [Agent](plugins/schema-definition/agent.md) ✅ | [Manifest](/zh_CN/plugins/schema-definition/manifest) ❌ + +## 来自 [如何搭建 AI 图片生成应用](workshop/basic/build-ai-image-generation-app.md) + +* [如何搭建 AI 图片生成应用](workshop/basic/build-ai-image-generation-app.md) ✅ | [Dify - 工具 - Stability](https://cloud.dify.ai/tools) ❌ + +## 来自 [ChatFlow 实战:搭建 Twitter 账号分析助手](workshop/intermediate/twitter-chatflow.md) + +* [ChatFlow 实战:搭建 Twitter 账号分析助手](workshop/intermediate/twitter-chatflow.md) ✅ | [February 2, 2023](https://twitter.com/XDevelopers/status/1621026986784337922?ref\_src=twsrc%5Etfw) ❌ | [Dify](https://cloud.dify.ai/) ❌ | [云服务](https://cloud.dify.ai/) ❌ | [`https%3A%2F%2Ftwitter.com%2Felonmusk`](https://twitter.com/elonmusk) ❌ | [X@dify\_ai](https://x.com/dify\_ai) ❌ + diff --git a/zh-hans/url-report/link-check-report.md 
b/zh-hans/url-report/link-check-report.md new file mode 100644 index 00000000..a6cc2e69 --- /dev/null +++ b/zh-hans/url-report/link-check-report.md @@ -0,0 +1,258 @@ +# GitBook Link Check Report (Full Version) + +This report lists all links in the GitBook documentation and the documents they reference. Each line uses the format: +* [Document title](document link) | [Referenced doc 1](link 1) | [Referenced doc 2](link 2) | ... + + +## Getting Started + +* [欢迎使用 Dify](README.md) ✅ | [LLMOps](learn-more/extended-reading/what-is-llmops.md) ✅ | [**快速开始**](guides/application-orchestrate/creating-an-application.md) ✅ | [**自部署 Dify 到服务器**](getting-started/install-self-hosted/) ✅ | [**接入开源模型**](guides/model-configuration/) ✅ | [**特性规格**](getting-started/readme/features-and-specifications.md) ✅ | [**GitHub**](https://github.com/langgenius/dify) ✅ + * [特性与技术规格](getting-started/readme/features-and-specifications.md) ✅ + * [模型供应商列表](getting-started/readme/model-providers.md) ✅ | [请求](https://github.com/langgenius/dify/discussions/categories/ideas) ✅ | [contribution.md](../../community/contribution.md "mention") ❌ +* [云服务](getting-started/cloud.md) ✅ | [云服务](http://cloud.dify.ai) ✅ | [Dify 云服务](https://cloud.dify.ai) ✅ | [创建应用](../guides/application-orchestrate/creating-an-application.md) ✅ | [此处](https://dify.ai/pricing) ✅ +* [社区版](getting-started/install-self-hosted/README.md) ✅ | [Docker Compose 部署](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [本地源码启动](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/local-source-code) ✅ | [Dify 社区版](https://github.com/langgenius/dify) ✅ | [贡献指南](https://github.com/langgenius/dify/blob/main/CONTRIBUTING_CN.md) ✅ + * [Docker Compose 部署](getting-started/install-self-hosted/docker-compose.md) ✅ | [《在 Mac 内安装 Docker 桌面端》](https://docs.docker.com/desktop/install/mac-install/) ✅ | [安装 Docker](https://docs.docker.com/engine/install/) ✅ | [安装 Docker Compose](https://docs.docker.com/compose/install/) ✅ | [使用 WSL 2 后端在 Windows 上安装 Docker Desktop](https://docs.docker.com/desktop/windows/install/#wsl-2-backend) ✅ | [Docker 
官方文档](https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command) ✅ | [常见问题](faq.md) ✅ + * [本地源码启动](getting-started/install-self-hosted/local-source-code.md) ✅ | [在 Mac 上安装 Docker Desktop](https://docs.docker.com/desktop/mac/install/) ✅ | [安装 Docker](https://docs.docker.com/engine/install/) ✅ | [安装 Docker Compose](https://docs.docker.com/compose/install/) ✅ | [使用 WSL 2 后端在 Windows 上安装 Docker Desktop](https://docs.docker.com/desktop/windows/install/#wsl-2-backend) ✅ | [Link](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq#id-15.-wen-ben-zhuan-yu-yin-yu-dao-zhe-ge-cuo-wu-zen-me-ban) ✅ | [pyenv](https://github.com/pyenv/pyenv) ✅ | [Poetry](https://python-poetry.org/docs/) ✅ | [Node.js v18.x (LTS)](http://nodejs.org) ✅ | [NPM 版本 8.x.x ](https://www.npmjs.com/) ✅ | [Yarn](https://yarnpkg.com/) ✅ + * [宝塔面板部署](getting-started/install-self-hosted/bt-panel.md) ✅ | [安装宝塔面板](https://www.bt.cn/new/download.html) ✅ + * [单独启动前端 Docker 容器](getting-started/install-self-hosted/start-the-frontend-docker-container.md) ✅ | [http://127.0.0.1:3000](http://127.0.0.1:3000) ✅ + * [环境变量说明](getting-started/install-self-hosted/environments.md) ✅ | [跨域 / 身份相关指南](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq#id-3.-an-zhuang-shi-hou-wu-fa-deng-lu-deng-lu-cheng-gong-dan-hou-xu-jie-kou-jun-ti-shi-401) ✅ | [文档](https://help.aliyun.com/zh/oss/user-guide/regions-and-endpoints) ✅ | [文档](https://help.aliyun.com/zh/oss/user-guide/regions-and-endpoints) ✅ | [文档](https://api.aliyun.com/troubleshoot?q=0016-00000005) ✅ | [文档](https://support.huaweicloud.com/sdk-python-devg-obs/obs_22_0500.html) ✅ | [文档]( https://www.volcengine.com/docs/6349/107356) ❌ | [文档](https://www.volcengine.com/docs/6349/107356) ✅ | [文档](https://weaviate.io/developers/weaviate/manage-data/import#how-to-set-batch-parameters) ✅ | [Zilliz Cloud](https://docs.zilliz.com.cn/docs/free-trials) ✅ | [MyScale 文档](https://myscale.com/docs/en/text-search/#understanding-fts-index-parameters) ✅ | 
[Analyticdb 文档](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/support/create-an-accesskey-pair) ✅ | [Analyticdb 文档](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/getting-started/create-an-instance-1) ✅ | [Analyticdb 文档](https://help.aliyun.com/zh/analyticdb/analyticdb-for-postgresql/getting-started/createa-a-privileged-account) ✅ | [https://www.notion.so/my-integrations](https://www.notion.so/my-integrations) ✅ | [no-reply@dify.ai](mailto:no-reply@dify.ai) ✅ | [no-reply@dify.ai](mailto:no-reply@dify.ai) ✅ | [工具](https://github.com/langgenius/dify/blob/main/api/core/tools/provider/_position.yaml) ❌ | [模型供应商](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/model_providers/_position.yaml) ✅ + * [常见问题](getting-started/install-self-hosted/faq.md) ✅ | [《环境变量说明:邮件相关配置》](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#you-jian-xiang-guan-pei-zhi) ✅ | [本地部署相关](../../learn-more/faq/install-faq.md) ✅ +* [Dify Premium](getting-started/dify-premium.md) ✅ | [AWS AMI](https://docs.aws.amazon.com/zh\_cn/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html) ❌ | [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) ✅ | [Dify Cloud](https://docs.dify.ai/v/zh-hans/getting-started/cloud) ✅ | [计划](https://dify.ai/pricing) ✅ | [此处](https://github.com/langgenius/dify/releases/tag/1.0.0) ✅ + +## Guides + +* [接入大模型](guides/model-configuration/README.md) ✅ | [OpenAI](https://platform.openai.com/account/api-keys) ✅ | [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service/) ❌ | [Anthropic](https://console.anthropic.com/account/keys) ✅ | [讯飞星火](https://www.xfyun.cn/solutions/xinghuoAPI) ✅ | [文心一言](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application) ✅ | [通义千问](https://dashscope.console.aliyun.com/api-key\_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k) ✅ | 
[Minimax](https://api.minimax.chat/user-center/basic-information/interface-key) ✅ | [Jina Embeddings](https://jina.ai/embeddings/) ✅ | [**Rerank 模型**](https://docs.dify.ai/v/zh-hans/advanced/retrieval-augment/rerank) ✅ | [Jina Reranker](https://jina.ai/reranker) ✅ | [PKCS1\_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html) ✅ | [Hugging Face](../../development/models-integration/hugging-face.md) ✅ | [Replicate](../../development/models-integration/replicate.md) ✅ | [Xinference](../../development/models-integration/xinference.md) ✅ | [OpenLLM](../../development/models-integration/openllm.md) ✅ + * [增加新供应商](guides/model-configuration/new-provider.md) ✅ | [Provider Schema](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [AI Model Entity](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md#aimodelentity) ❌ | [`model_credential_schema`](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [YAML 配置信息](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/schema.md) ❌ | [AnthropicProvider](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/model\_providers/anthropic/anthropic.py) ❌ | [**增加预定义模型** ](https://docs.dify.ai/v/zh-hans/guides/model-configuration/predefined-model) ✅ | [**增加自定义模型**](https://docs.dify.ai/v/zh-hans/guides/model-configuration/customizable-model) ✅ + * [预定义模型接入](guides/model-configuration/predefined-model.md) ✅ | [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md) ❌ | [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ + * [自定义模型接入](guides/model-configuration/customizable-model.md) ✅ | [Interfaces](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/docs/zh\_Hans/interfaces.md) ❌ | 
[llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ + * [接口方法](guides/model-configuration/interfaces.md) ✅ | [[PromptMessage](#PromptMessage) ✅ | [UserPromptMessage](#UserPromptMessage) ✅ | [SystemPromptMessage](#SystemPromptMessage) ✅ | [UserPromptMessage](#UserPromptMessage) ✅ | [AssistantPromptMessage](#AssistantPromptMessage) ✅ | [ToolPromptMessage](#ToolPromptMessage) ✅ | [[PromptMessageTool](#PromptMessageTool) ✅ | [[LLMResultChunk](#LLMResultChunk) ✅ | [LLMResult](#LLMResult) ✅ | [[LLMResultChunk](#LLMResultChunk) ✅ | [LLMResult](#LLMResult) ✅ | [openai](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py) ❌ | [TextEmbeddingResult](#TextEmbeddingResult) ✅ | [RerankResult](#RerankResult) ✅ + * [配置规则](guides/model-configuration/schema.md) ✅ | [Provider](#Provider) ✅ | [AIModelEntity](#AIModelEntity) ✅ | [[ModelType](#ModelType) ✅ | [[ConfigurateMethod](#ConfigurateMethod) ✅ | [ProviderCredentialSchema](#ProviderCredentialSchema) ✅ | [ModelCredentialSchema](#ModelCredentialSchema) ✅ | [ModelType](#ModelType) ✅ | [[ModelFeature](#ModelFeature) ✅ | [LLMMode](#LLMMode) ✅ | [[ParameterRule](#ParameterRule) ✅ | [PriceConfig](#PriceConfig) ✅ | [[CredentialFormSchema](#CredentialFormSchema) ✅ | [[CredentialFormSchema](#CredentialFormSchema) ✅ | [FormType](#FormType) ✅ | [[FormOption](#FormOption) ✅ | [[FormShowOnObject](#FormShowOnObject) ✅ | [[FormShowOnObject](#FormShowOnObject) ✅ + * [负载均衡](guides/model-configuration/load-balancing.md) ✅ | [订阅 SaaS 付费服务](../../getting-started/cloud.md#ding-yue-ji-hua) ✅ +* [构建应用](guides/application-orchestrate/README.md) ✅ + * [创建应用](guides/application-orchestrate/creating-an-application.md) ✅ | [应用管理:导入](https://docs.dify.ai/zh-hans/guides/management/app-management#dao-ru-ying-yong) ✅ + * [聊天助手](guides/application-orchestrate/chatbot-application.md) ✅ | [知识库](../knowledge-base/) ✅ | [Claude 3.5 
Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) ✅ | [Gemini 1.5 Pro](https://ai.google.dev/api/files) ✅ | [多模型调试](./multiple-llms-debugging.md) ✅ | [发布](https://docs.dify.ai/v/zh-hans/guides/application-publishing) ✅ | [WebApp 的模版](https://github.com/langgenius/webapp-conversation) ✅ | [Agent 类型](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/agent) ✅ | [在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application) ✅ + * [多模型调试](guides/application-orchestrate/multiple-llms-debugging.md) ✅ | [“增加新供应商”](https://docs.dify.ai/v/zh-hans/guides/model-configuration/new-provider) ✅ + * [Agent](guides/application-orchestrate/agent.md) ✅ | [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) ✅ | [Gemini 1.5 Pro](https://ai.google.dev/api/files) ✅ + * [应用工具箱](guides/application-orchestrate/app-toolkits/README.md) ✅ | [应用](../#application_type) ✅ | [引用与归属](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu) ✅ | [敏感内容审查](moderation-tool.md) ✅ | [标注回复](../../annotation/annotation-reply.md) ✅ + * [敏感内容审查](guides/application-orchestrate/app-toolkits/moderation-tool.md) ✅ | [platform.openai.com](https://platform.openai.com/docs/guides/moderation/overview) ✅ | [moderation.md](../../extension/api-based-extension/moderation.md "mention") ❌ +* [工作流](guides/workflow/README.md) ✅ + * [关键概念](guides/workflow/key-concept.md) ✅ | [节点说明](node/) ✅ | [《变量》](variables.md) ✅ | [End 节点](node/end.md) ✅ | [Answer 节点](node/answer.md) ✅ | [LLM](node/llm.md) ✅ | [问题分类](node/question-classifier.md) ✅ | [变量](key-concept.md#bian-liang) ✅ + * [变量](guides/workflow/variables.md) ✅ | [变量赋值](node/variable-assigner.md) ✅ | [变量赋值](node/variable-assigner.md) ✅ + * [节点说明](guides/workflow/node/README.md) ✅ + * [开始](guides/workflow/node/start.md) ✅ | [上传的文件](../file-upload.md) ✅ | [**系统变量**](../variables.md#xi-tong-bian-liang) ✅ | [文件上传](../file-upload.md) ✅ | 
[文件上传](../file-upload.md) ✅ | [外部数据工具](../../extension/api-based-extension/external-data-tool.md) ✅ + * [LLM](guides/workflow/node/llm.md) ✅ | [支持](../../../getting-started/readme/model-providers.md) ✅ | [模型配置](../../model-configuration/) ✅ | [知识检索](knowledge-retrieval.md) ✅ | [知识检索节点](knowledge-retrieval.md) ✅ | [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) ✅ | [文件上传](https://docs.dify.ai/zh-hans/guides/workflow/file-upload) ✅ | [提示词专家模式(已下线)](../../../learn-more/extended-reading/prompt-engineering/prompt-engineering-1/) ✅ | [官方文档](https://jinja.palletsprojects.com/en/3.1.x/templates/) ✅ | [异常处理](https://docs.dify.ai/guides/workflow/error-handling) ✅ | [“知识库”](../../knowledge-base/) ✅ | [知识检索节点](knowledge-retrieval.md) ✅ | [**引用与归属**](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu) ✅ | [文件上传](../file-upload.md) ✅ | [异常处理](https://docs.dify.ai/guides/workflow/error-handling) ✅ + * [知识检索](guides/workflow/node/knowledge-retrieval.md) ✅ | [基本概念](../../../learn-more/extended-reading/retrieval-augment/) ✅ | [创建](../../knowledge-base/create-knowledge-and-upload-documents/) ✅ | [在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application) ✅ | [召回模式](../../../learn-more/extended-reading/retrieval-augment/retrieval.md) ✅ | [**引用与归属**](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu) ✅ + * [问题分类](guides/workflow/node/question-classifier.md) ✅ | [文件变量](https://docs.dify.ai/zh-hans/guides/workflow/variables) ✅ + * [条件分支](guides/workflow/node/ifelse.md) ✅ + * [代码执行](guides/workflow/node/code.md) ✅ | [介绍](code.md#介绍) ✅ | [使用场景](code.md#使用场景) ✅ | [本地部署](code.md#本地部署) ✅ | [安全策略](code.md#安全策略) ✅ | [变量引用](../key-concept.md#变量) ✅ | [这里](https://github.com/langgenius/dify/tree/main/docker/docker-compose.middleware.yaml) ✅ | [这里](https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command) ✅ | 
[异常处理](https://docs.dify.ai/guides/workflow/error-handling) ✅ + * [模板转换](guides/workflow/node/template.md) ✅ | [https://jinja.palletsprojects.com/en/3.1.x/](https://jinja.palletsprojects.com/en/3.1.x/) ✅ | [官方文档](https://jinja.palletsprojects.com/en/3.1.x/templates/) ✅ + * [文档提取器](guides/workflow/node/doc-extractor.md) ✅ | [list-operator.md](list-operator.md "mention") ❌ | [“开始”](start.md) ✅ | [附加功能](../additional-features.md) ✅ + * [列表操作](guides/workflow/node/list-operator.md) ✅ | [MIME 类型](https://datatracker.ietf.org/doc/html/rfc2046) ✅ | [列表操作](list-operator.md) ✅ | [Features](../additional-features.md) ✅ + * [变量聚合](guides/workflow/node/variable-aggregator.md) ✅ + * [变量赋值](guides/workflow/node/variable-assigner.md) ✅ | [会话变量](../key-concept.md#hui-hua-bian-liang) ✅ + * [迭代](guides/workflow/node/iteration.md) ✅ | [扩展阅读:数组](../../../learn-more/extended-reading/what-is-array-variable.md) ✅ | [扩展阅读:如何将数组转换为文本](iteration.md#ru-he-jiang-shu-zu-zhuan-huan-wei-wen-ben) ✅ | [**什么是数组变量?**](../../../learn-more/extended-reading/what-is-array-variable.md) ✅ | [代码节点](code.md) ✅ | [参数提取](parameter-extractor.md) ✅ | [知识库检索](knowledge-retrieval.md) ✅ | [迭代](iteration.md) ✅ | [工具](tools.md) ✅ | [HTTP 请求](http-request.md) ✅ + * [参数提取](guides/workflow/node/parameter-extractor.md) ✅ | [工具](https://docs.dify.ai/v/zh-hans/guides/tools) ✅ | [迭代](iteration.md#id-1-ding-yi) ✅ | [结构化参数的转换](iteration.md#id-2-chang-jing) ✅ | [迭代节点](iteration.md) ✅ | [**HTTP 请求**](http-request.md) ✅ + * [HTTP 请求](guides/workflow/node/http-request.md) ✅ | [异常处理](https://docs.dify.ai/zh-hans/guides/workflow/error-handling) ✅ + * [Agent](guides/workflow/node/agent.md) ✅ | [公开仓库](https://github.com/langgenius/dify-plugins) ✅ + * [工具](guides/workflow/node/tools.md) ✅ | [OpenAPI/Swagger 标准格式](https://swagger.io/specification/) ✅ | [工具配置说明](https://docs.dify.ai/v/zh-hans/guides/tools) ✅ | [变量](https://docs.dify.ai/v/zh-hans/guides/workflow/variables) ✅ | 
[异常处理](https://docs.dify.ai/zh-hans/guides/workflow/error-handling) ✅ | [工具配置说明](https://docs.dify.ai/v/zh-hans/guides/tools) ✅ + * [结束](guides/workflow/node/end.md) ✅ | [长故事生成工作流](iteration.md#shi-li-2-chang-wen-zhang-die-dai-sheng-cheng-qi-ling-yi-zhong-bian-pai-fang-shi) ✅ + * [直接回复](guides/workflow/node/answer.md) ✅ + * [循环](guides/workflow/node/loop.md) ✅ + * [快捷键](guides/workflow/shortcut-key.md) ✅ + * [编排节点](guides/workflow/orchestrate-node.md) ✅ + * [文件上传](guides/workflow/file-upload.md) ✅ | [ChatFlow](key-concept.md#chatflow-he-workflow) ✅ | [WorkFlow](key-concept.md#chatflow-he-workflow) ✅ | [变量](variables.md) ✅ | ["开始节点"](node/start.md) ✅ | ["附加功能"](additional-features.md) ✅ | ["开始节点"](node/start.md) ✅ | [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support) ✅ | [**文档提取器**](node/doc-extractor.md) ✅ | [外部工具](../tools/advanced-tool-integration.md) ✅ | [文档提取器](node/doc-extractor.md) ✅ | [“开始”](node/start.md) ✅ | [**“文档提取器”**](node/doc-extractor.md) ✅ | [列表操作](node/list-operator.md) ✅ | [动手实验室 - 使用文件上传搭建文章理解助手](../../workshop/intermediate/article-reader.md) ✅ + * [异常处理](guides/workflow/error-handling/README.md) ✅ | [LLM](../node/llm.md) ✅ | [HTTP](../node/http-request.md) ✅ | [代码](../node/code.md) ✅ | [工具](../node/tools.md) ✅ | [预定义异常处理逻辑](predefined-nodes-failure-logic.md) ✅ | [下载地址](https://assets-docs.dify.ai/2024/12/087861aa20e06bb4f8a2bef7e7ae0522.yml) ✅ + * [预定义异常处理逻辑](guides/workflow/error-handling/predefined-nodes-failure-logic.md) ✅ | [LLM](../node/llm.md) ✅ | [HTTP](../node/http-request.md) ✅ | [代码](../node/code.md) ✅ | [工具](../node/tools.md) ✅ | [错误类型](https://docs.dify.ai/guides/workflow/error-handling/error-type) ✅ + * [错误类型](guides/workflow/error-handling/error-type.md) ✅ | [代码节点](https://docs.dify.ai/guides/workflow/node/code) ✅ | [LLM 节点](https://docs.dify.ai/guides/workflow/node/llm) ✅ | [上下文](https://docs.dify.ai/guides/workflow/node/llm#explanation-of-special-variables) ✅ | 
[此文档](https://docs.dify.ai/guides/tools/tool-configuration) ✅ | [HTTP 节点](https://docs.dify.ai/guides/workflow/node/http-request) ✅ + * [附加功能](guides/workflow/additional-features.md) ✅ | [模型供应商](../../getting-started/readme/model-providers.md) ✅ | [“知识检索”](node/knowledge-retrieval.md) ✅ | [敏感内容审查](../application-orchestrate/app-toolkits/moderation-tool.md) ✅ | [文档提取器](node/doc-extractor.md) ✅ | [文档提取器](node/doc-extractor.md) ✅ | [《文件上传:在开始节点添加变量》](file-upload.md#fang-fa-er-zai-tian-jia-wen-jian-bian-liang) ✅ | [列表操作](node/list-operator.md) ✅ | [外部数据工具](../extension/api-based-extension/external-data-tool.md) ✅ + * [预览与调试](guides/workflow/debug-and-preview/README.md) ✅ + * [预览与运行](guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md) ✅ + * [单步调试](guides/workflow/debug-and-preview/step-run.md) ✅ + * [对话/运行日志](guides/workflow/debug-and-preview/log.md) ✅ + * [检查清单](guides/workflow/debug-and-preview/checklist.md) ✅ + * [运行历史](guides/workflow/debug-and-preview/history.md) ✅ + * [应用发布](guides/workflow/publish.md) ✅ | [版本管理](https://docs.dify.ai/zh-hans/guides/management/version-control) ✅ + * [变更公告:图片上传被替换为文件上传](guides/workflow/bulletin.md) ✅ | [文件上传](file-upload.md) ✅ | [附加功能](additional-features.md) ✅ | [变量](variables.md) ✅ | [附加功能](additional-features.md) ✅ | [开始](node/start.md) ✅ | [附加功能](additional-features.md) ✅ | [开始](node/start.md) ✅ | [GitHub](https://github.com/langgenius/dify) ✅ | [Discord 频道](https://discord.gg/FngNHpbcY7) ✅ +* [知识库](guides/knowledge-base/README.md) ✅ | [RAG 管线](../../learn-more/extended-reading/retrieval-augment/) ✅ | [连接外部知识库](connect-external-knowledge-base.md) ✅ | [连接外部知识库](connect-external-knowledge-base.md) ✅ + * [创建知识库](guides/knowledge-base/create-knowledge-and-upload-documents/README.md) ✅ | [import-content-data](import-content-data/) ✅ | [chunking-and-cleaning-text.md](chunking-and-cleaning-text.md) ✅ | [setting-indexing-methods.md](setting-indexing-methods.md) ✅ | [在应用内集成知识库](../integrate-knowledge-within-application.md) ✅ | 
[知识库管理与文档维护](../knowledge-and-documents-maintenance/) ✅ | [ ](https://docs.unstructured.io/welcome) ✅ | [**Unstructured ETL** ](https://unstructured.io/) ✅ | [环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi) ✅ | [官方文档](https://docs.unstructured.io/open-source/core-functionality/partitioning) ✅ | [《Dify:Embedding 技术与 Dify 知识库设计/规划》](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ) ✅ | [元数据](https://docs.dify.ai/zh-hans/guides/knowledge-base/metadata) ✅ + * [1. 导入文本数据](guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/README.md) ✅ | [订阅计划](https://dify.ai/pricing) ✅ | [订阅计划](https://dify.ai/pricing) ✅ | [sync-from-notion.md](sync-from-notion.md) ✅ | [sync-from-website.md](sync-from-website.md) ✅ + * [1.1 从 Notion 导入数据](guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-notion.md) ✅ | [Notion 官方文档](https://developers.notion.com/docs/authorization) ✅ | [创建集成](https://www.notion.so/my-integrations) ✅ + * [1.2 从网页导入数据](guides/knowledge-base/create-knowledge-and-upload-documents/import-content-data/sync-from-website.md) ✅ | [Jina Reader](https://jina.ai/reader) ✅ | [Firecrawl ](https://www.firecrawl.dev/) ✅ | [Firecrawl 官网](https://www.firecrawl.dev/) ✅ | [ Jina Reader 官网](https://jina.ai/reader) ✅ + * [2. 指定分段模式](guides/knowledge-base/create-knowledge-and-upload-documents/chunking-and-cleaning-text.md) ✅ | [ETL](chunking-and-cleaning-text.md#etl) ✅ | [正则表达式语法](https://regexr.com/) ✅ | [设定索引方法](setting-indexing-methods.md) ✅ | [正则表达式语法](https://regexr.com/) ✅ | [正则表达式语法](https://regexr.com/) ✅ | [“高质量索引”](chunking-and-cleaning-text.md#gao-zhi-liang-suo-yin) ✅ | [setting-indexing-methods.md](setting-indexing-methods.md) ✅ + * [3. 
设定索引方法与检索设置](guides/knowledge-base/create-knowledge-and-upload-documents/setting-indexing-methods.md) ✅ | [检索设置](setting-indexing-methods.md#retrieval_settings) ✅ | [《Embedding 技术与 Dify》](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ) ✅ | [**社区版**](../../../getting-started/install-self-hosted/) ✅ | [**经济型索引方法**](setting-indexing-methods.md#jing-ji) ✅ | [下文](setting-indexing-methods.md#dao-pai-suo-yin) ✅ | [《Dify:Embedding 技术与 Dify 知识库设计/规划》](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ) ✅ | [retrieval-test-and-citation.md](../retrieval-test-and-citation.md) ✅ + * [管理知识库](guides/knowledge-base/knowledge-and-documents-maintenance/README.md) ✅ | [索引方法文档](../create-knowledge-and-upload-documents/setting-indexing-methods.md) ✅ | [检索设置文档](../create-knowledge-and-upload-documents/setting-indexing-methods.md) ✅ | [maintain-knowledge-documents.md](maintain-knowledge-documents.md) ✅ | [maintain-dataset-via-api.md](maintain-dataset-via-api.md) ✅ + * [维护知识库内文档](guides/knowledge-base/knowledge-and-documents-maintenance/maintain-knowledge-documents.md) ✅ | [文本分段模式](../create-knowledge-and-upload-documents/chunking-and-cleaning-text.md) ✅ | [通用模式](../create-knowledge-and-upload-documents/#tong-yong) ✅ | [父子模式](../create-knowledge-and-upload-documents/#fu-zi-fen-duan) ✅ | [此处](https://dify.ai/pricing) ✅ | [元数据](https://docs.dify.ai/zh-hans/guides/knowledge-base/metadata) ✅ + * [通过 API 维护知识库](guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md) ✅ + * [元数据](guides/knowledge-base/metadata.md) ✅ | [在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application) ✅ | [通过 API 维护知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api) ✅ + * [在应用内集成知识库](guides/knowledge-base/integrate-knowledge-within-application.md) ✅ | [所有应用类型](../application-orchestrate/#application_type) ✅ | [Rerank 
策略](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/rerank) ✅ | [《Dify:Embedding 技术与 Dify 知识库设计/规划》](https://mp.weixin.qq.com/s/vmY_CUmETo2IpEBf1nEGLQ) ✅ | [重排序](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/rerank) ✅ + * [召回测试/引用归属](guides/knowledge-base/retrieval-test-and-citation.md) ✅ | [检索增强生成(RAG)](../../learn-more/extended-reading/retrieval-augment/) ✅ + * [知识库请求频率限制](guides/knowledge-base/knowledge-request-rate-limit.md) ✅ + * [连接外部知识库](guides/knowledge-base/connect-external-knowledge-base.md) ✅ | [AWS Bedrock](https://aws.amazon.com/bedrock/) ✅ | [外部知识库 API 规范。](external-knowledge-api-documentation.md) ✅ | [外部知识库 API](https://docs.dify.ai/zh-hans/guides/knowledge-base/external-knowledge-api-documentation) ✅ | [外部知识库 API](https://docs.dify.ai/zh-hans/guides/knowledge-base/external-knowledge-api-documentation) ✅ | [how-to-connect-aws-bedrock.md](../../learn-more/use-cases/how-to-connect-aws-bedrock.md "mention") ❌ + * [外部知识库 API](guides/knowledge-base/external-knowledge-api-documentation.md) ✅ | [连接外部知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/connect-external-knowledge-base) ✅ +* [工具](guides/tools/README.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [Dify 开发贡献文档](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) ✅ | [官方文档说明](https://swagger.io/specification/) ✅ | [dify-tools-worker](https://github.com/crazywoola/dify-tools-worker) ✅ + * [快速接入工具](guides/tools/quick-tool-integration.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/develop-plugins) ✅ + * [高级接入工具](guides/tools/advanced-tool-integration.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/develop-plugins) ✅ | [快速接入](https://docs.dify.ai/v/zh-hans/guides/tools/quick-tool-integration) ✅ + * [工具配置](guides/tools/tool-configuration/README.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | 
[StableDiffusion](./stable-diffusion.md) ✅ | [SearXNG](./searxng.md) ✅ + * [Google](guides/tools/tool-configuration/google.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [Serp 平台](https://serpapi.com/dashboard) ✅ + * [Bing](guides/tools/tool-configuration/bing.md) ✅ | [プラグイン開発](https://docs.dify.ai/ja-jp/plugins/quick-start/install-plugins) ✅ | [Azure 平台](https://platform.openai.com/) ✅ + * [SearchApi](guides/tools/tool-configuration/searchapi.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [SearchApi](https://www.searchapi.io/) ✅ + * [StableDiffusion](guides/tools/tool-configuration/stable-diffusion.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [官方仓库](https://github.com/AUTOMATIC1111/stable-diffusion-webui) ✅ | [pastel-mix](https://huggingface.co/JamesFlare/pastel-mix) ✅ | [变量](https://docs.dify.ai/v/zh-hans/guides/workflow/variables) ✅ + * [Dall-e](guides/tools/tool-configuration/dall-e.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [OpenAI Platform](https://platform.openai.com/) ✅ | [变量](https://docs.dify.ai/v/zh-hans/guides/workflow/variables) ✅ + * [Perplexity Search](guides/tools/tool-configuration/perplexity.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [Perplexity](https://www.perplexity.ai/settings/api) ❌ + * [AlphaVantage 股票分析](guides/tools/tool-configuration/alphavantage.md) ✅ | [@zhuhao](https://github.com/hwzhuhao) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [AlphaVantage](https://www.alphavantage.co/support/#api-key) ✅ | [变量](https://docs.dify.ai/v/zh-hans/guides/workflow/variables) ✅ + * [Youtube](guides/tools/tool-configuration/youtube.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [Youtube](https://www.youtube.com/) ✅ | [Google 凭据网站](https://console.cloud.google.com/apis/credentials) ✅ | 
[Dify 工具页](https://cloud.dify.ai/tools) ❌ + * [SearXNG](guides/tools/tool-configuration/searxng.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [社区版](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [ SearXNG 安装文档](https://docs.searxng.org/admin/installation.html) ✅ | [这里](https://docs.searxng.org/admin/settings/index.html) ✅ + * [Serper](guides/tools/tool-configuration/serper.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [Serper 平台](https://serper.dev/signup) ✅ + * [SiliconFlow (支持 Flux 绘图)](guides/tools/tool-configuration/siliconflow.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [SiliconCloud API 管理页面](https://cloud.siliconflow.cn/account/ak) ✅ | [变量](../../workflow/variables.md) ✅ + * [ComfyUI](guides/tools/tool-configuration/comfyui.md) ✅ | [插件开发](https://docs.dify.ai/zh-hans/plugins/quick-start/install-plugins) ✅ | [ComfyUI](https://www.comfy.org/) ✅ | [官方文档](https://docs.comfy.org/get_started/gettingstarted) ✅ +* [发布](guides/application-publishing/README.md) ✅ | [launch-your-webapp-quickly](launch-your-webapp-quickly/) ✅ | [embedding-in-websites.md](embedding-in-websites.md) ✅ | [developing-with-apis.md](developing-with-apis.md) ✅ | [based-on-frontend-templates.md](based-on-frontend-templates.md) ✅ + * [发布为公开 Web 站点](guides/application-publishing/launch-your-webapp-quickly/README.md) ✅ | [https://udify.app/](https://udify.app/) ❌ | [前往预览](text-generator.md) ✅ | [前往预览](conversation-application.md) ✅ | [ 寻求支持](../../../community/support.md) ✅ | [《嵌入网站》](https://docs.dify.ai/v/zh-hans/guides/application-publishing/embedding-in-websites) ✅ + * [Web 应用的设置](guides/application-publishing/launch-your-webapp-quickly/web-app-settings.md) ✅ + * [文本生成型应用](guides/application-publishing/launch-your-webapp-quickly/text-generator.md) ✅ + * 
[对话型应用](guides/application-publishing/launch-your-webapp-quickly/conversation-application.md) ✅ | [《引用与归属》](https://docs.dify.ai/v/zh-hans/guides/knowledge-base/retrieval-test-and-citation#id-2-yin-yong-yu-gui-shu) ✅ + * [嵌入网站](guides/application-publishing/embedding-in-websites.md) ✅ | [https://dev.udify.app](https://dev.udify.app) ❌ | [https://udify.app](https://udify.app) ❌ + * [基于 APIs 开发](guides/application-publishing/developing-with-apis.md) ✅ + * [基于前端组件再开发](guides/application-publishing/based-on-frontend-templates.md) ✅ | [对话型应用](https://github.com/langgenius/webapp-conversation) ✅ | [文本生成型应用](https://github.com/langgenius/webapp-text-generator) ✅ +* [标注](guides/annotation/README.md) ✅ + * [日志与标注](guides/annotation/logs.md) ✅ | [价格页](https://dify.ai/pricing) ✅ | [社区版](https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ + * [标注回复](guides/annotation/annotation-reply.md) ✅ +* [监测](guides/monitoring/README.md) ✅ + * [集成外部 Ops 工具](guides/monitoring/integrate-external-ops-tools/README.md) ✅ + * [集成 LangSmith](guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md) ✅ | [https://www.langchain.com/langsmith](https://www.langchain.com/langsmith) ✅ | [LangSmith](https://www.langchain.com/langsmith) ✅ + * [集成 Langfuse](guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md) ✅ | [https://langfuse.com/](https://langfuse.com/) ✅ | [官网注册](https://langfuse.com/) ✅ + * [集成 Opik](guides/monitoring/integrate-external-ops-tools/integrate-opik.md) ✅ | [Opik](https://www.comet.com/site/products/opik/) ✅ | [Opik](https://www.comet.com/signup?from=llm) ✅ + * [数据分析](guides/monitoring/analysis.md) ✅ +* [扩展](guides/extension/README.md) ✅ + * [API 扩展](guides/extension/api-based-extension/README.md) ✅ | [Ngrok](https://ngrok.com) ✅ | [https://ngrok.com](https://ngrok.com) ✅ | [外部数据工具](../../knowledge-base/external-data-tool.md "mention") ❌ | [cloudflare-workers.md](cloudflare-workers.md "mention") ❌ + * [使用 Cloudflare Workers 
部署 API Tools](guides/extension/api-based-extension/cloudflare-workers.md) ✅ | [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers) ✅ | [Cloudflare Workers](https://workers.cloudflare.com/) ✅ | [Cloudflare Workers CLI](https://developers.cloudflare.com/workers/cli-wrangler/install-update) ✅ | [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers) ✅ + * [敏感内容审查](guides/extension/api-based-extension/moderation.md) ✅ + * [代码扩展](guides/extension/code-based-extension/README.md) ✅ | [外部数据工具](external-data-tool.md "mention") ❌ | [敏感内容审核](moderation.md "mention") ❌ + * [外部数据工具](guides/extension/code-based-extension/external-data-tool.md) ✅ | [api-based-extension](../api-based-extension/ "mention") ❌ | [前端组件规范](https://docs.dify.ai/zh-hans/guides/extension/code-based-extension "mention") ✅ + * [敏感内容审查](guides/extension/code-based-extension/moderation.md) ✅ | [前端组件规范](https://docs.dify.ai/zh-hans/guides/extension/code-based-extension "mention") ✅ +* [协同](guides/workspace/README.md) ✅ | [发现](app.md) ✅ + * [发现](guides/workspace/app.md) ✅ + * [邀请与管理成员](guides/workspace/invite-and-manage-members.md) ✅ +* [管理](guides/management/README.md) ✅ + * [应用管理](guides/management/app-management.md) ✅ | [更新 Dify](https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose#geng-xin-dify) ✅ + * [团队成员管理](guides/management/team-members-management.md) ✅ | [环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments) ✅ + * [个人账号管理](guides/management/personal-account-management.md) ✅ | [ Github 代码仓库](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) ✅ + * [订阅管理](guides/management/subscription-management.md) ✅ | [标记](https://docs.dify.ai/v/zh-hans/guides/biao-zhu/logs) ✅ | [标记](https://docs.dify.ai/v/zh-hans/guides/biao-zhu/logs) ✅ | [Dify 定价页](https://dify.ai/pricing) ✅ + * [版本管理](guides/management/version-control.md) ✅ + +## Workshop + +* [初级](workshop/basic/README.md) ✅ + * [如何搭建 AI 
图片生成应用](workshop/basic/build-ai-image-generation-app.md) ✅ | [点击这里](https://platform.stability.ai/account/keys) ✅ | [Dify - 工具 - Stability](https://cloud.dify.ai/tools) ❌ | [groq API 管理页](https://console.groq.com/keys) ✅ + * [AI Agent 实战:搭建个人在线旅游助手](workshop/basic/travel-assistant.md) ✅ | [如何搭建 AI 图片生成应用](build-ai-image-generation-app.md) ✅ | [Dify](https://dify.ai) ✅ | [社区版 - Docker Compose 部署](../../getting-started/install-self-hosted/docker-compose.md) ✅ | [必应](https://docs.dify.ai/zh-hans/guides/tools/tool-configuration/bing) ✅ | [Perplexity](https://docs.dify.ai/zh-hans/guides/tools/tool-configuration/perplexity) ✅ | [SerpAPI - API Key](https://serpapi.com/manage-api-key) ✅ +* [中级](workshop/intermediate/README.md) ✅ + * [使用文件上传搭建文章理解助手](workshop/intermediate/article-reader.md) ✅ | [文件上传](../../guides/workflow/file-upload.md) ✅ + * [使用知识库搭建智能客服机器人](workshop/intermediate/customer-service-bot.md) ✅ | [Dify 的帮助文档](https://docs.dify.ai) ✅ | [帮助文档](https://docs.dify.ai) ✅ | [帮助文档](../../guides/workflow/variables.md#xi-tong-bian-liang) ✅ | [如何连接 AWS Bedrock 知识库?](../../learn-more/use-cases/how-to-connect-aws-bedrock.md) ✅ + * [ChatFlow 实战:搭建 Twitter 账号分析助手](workshop/intermediate/twitter-chatflow.md) ✅ | [February 2, 2023](https://twitter.com/XDevelopers/status/1621026986784337922?ref\_src=twsrc%5Etfw) ❌ | [crawlbase.com](https://crawlbase.com) ✅ | [Dify](https://cloud.dify.ai/) ❌ | [云服务](https://cloud.dify.ai/) ❌ | [docker compose 本地](https://docs.dify.ai/getting-started/install-self-hosted) ✅ | [Crawlbase文档](https://crawlbase.com/docs/crawling-api/scrapers/#twitter-profile) ✅ | [`https%3A%2F%2Ftwitter.com%2Felonmusk`](https://twitter.com/elonmusk) ❌ | [Crawlbase文档](https://crawlbase.com/docs/crawling-api/scrapers/#twitter-profile) ✅ | [此处](https://crawlbase.com/dashboard/account/docs) ✅ | [X@dify\_ai](https://x.com/dify\_ai) ❌ | [GitHub 仓库](https://github.com/langgenius/dify) ✅ + +## 社区 + +* [寻求支持](community/support.md) ✅ | 
[Github](https://github.com/langgenius/dify) ✅ | [Discord ](https://discord.gg/8Tpq4AcN9c) ✅ | [hello@dify.ai](mailto:hello@dify.ai) ✅ +* [成为贡献者](community/contribution.md) ✅ | [许可证和贡献者协议](https://github.com/langgenius/dify/blob/main/LICENSE) ✅ | [行为准则](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md) ✅ | [查找](https://github.com/langgenius/dify/issues?q=is:issue+is:closed) ✅ | [创建](https://github.com/langgenius/dify/issues/new/choose) ✅ | [@perzeusss](https://github.com/perzeuss) ✅ | [功能请求助手](https://udify.app/chat/MK2kVSnw1gakVwMX) ✅ | [@yeuoly](https://github.com/Yeuoly) ✅ | [@jyong](https://github.com/JohnJyong) ✅ | [@GarfieldDai](https://github.com/GarfieldDai) ✅ | [@iamjoel](https://github.com/iamjoel) ✅ | [@zxhlyh](https://github.com/zxhlyh) ✅ | [@guchenhe](https://github.com/guchenhe) ✅ | [@crazywoola](https://github.com/crazywoola) ✅ | [@takatost](https://github.com/takatost) ✅ | [community feedback board](https://github.com/langgenius/dify/discussions/categories/ideas) ✅ | [Docker](https://www.docker.com/) ✅ | [Docker Compose](https://docs.docker.com/compose/install/) ✅ | [Node.js v18.x (LTS)](http://nodejs.org) ✅ | [npm](https://www.npmjs.com/) ✅ | [Yarn](https://yarnpkg.com/) ✅ | [Python](https://www.python.org/) ✅ | [后端 README](https://github.com/langgenius/dify/blob/main/api/README.md) ✅ | [前端 README](https://github.com/langgenius/dify/blob/main/web/README.md) ✅ | [安装常见问题解答](https://docs.dify.ai/v/zh-hans/learn-more/faq/install-faq) ✅ | [http://localhost:3000](http://localhost:3000) ✅ | [此指南](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md) ✅ | [此指南](https://github.com/langgenius/dify/blob/main/api/core/tools/README_CN.md) ❌ | [Dify-docs](https://github.com/langgenius/dify-docs/tree/main/en/guides/tools/tool-configuration) ✅ | [Flask](https://flask.palletsprojects.com/en/3.0.x/) ✅ | [SQLAlchemy](https://www.sqlalchemy.org/) ✅ | 
[Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) ✅ | [Next.js](https://nextjs.org/) ✅ | [Tailwind CSS](https://tailwindcss.com/) ✅ | [React-i18next](https://react.i18next.com/) ✅ | [GitHub 的拉取请求教程](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) ✅ | [README](https://github.com/langgenius/dify/blob/main/README_CN.md) ✅ | [Discord](https://discord.com/invite/8Tpq4AcN9c) ✅ +* [为 Dify 文档做出贡献](community/docs-contribution.md) ✅ | [开源项目](https://github.com/langgenius/dify-docs) ✅ | [Issues 页](https://github.com/langgenius/dify-docs/issues) ✅ | [Discord](https://discord.com/invite/8Tpq4AcN9c) ✅ + +## 插件 + +* [功能简介](plugins/introduction.md) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [GitHub](publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ | [本地文件](publish-plugins/package-and-publish-plugin-file.md) ❌ | [模型服务商](quick-start/develop-plugins/model-plugin/integrate-the-predefined-model.md) ✅ | [自定义模型](quick-start/develop-plugins/model-plugin/customizable-model.md) ✅ | [快速开始: Model 插件](quick-start/develop-plugins/model-plugin/) ✅ | [快速开始:Tool 插件](quick-start/develop-plugins/tool-plugin.md) ✅ | [Agent 节点](../guides/workflow/node/agent.md) ✅ | [快速开始: Agent 策略插件](quick-start/develop-plugins/agent-strategy-plugin.md) ✅ | [快速开始:Extension 插件](quick-start/develop-plugins/extension-plugin.md) ✅ | [插件开发:Bundle 插件](quick-start/develop-plugins/bundle.md) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [install-plugins.md](quick-start/install-plugins.md) ✅ | [develop-plugins](quick-start/develop-plugins/) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [GitHub 仓库](https://github.com/langgenius/dify-plugins) ✅ | [publish-to-dify-marketplace](publish-plugins/publish-to-dify-marketplace/) ✅ | [publish-plugin-on-personal-github-repo.md](publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ | [package-plugin-file-and-publish.md](publish-plugins/package-plugin-file-and-publish.md) ✅ +* 
[快速开始](plugins/quick-start/README.md) ✅ | [install-plugins.md](install-plugins.md) ✅ | [tool-plugin.md](develop-plugins/tool-plugin.md) ✅ | [extension-plugin.md](develop-plugins/extension-plugin.md) ✅ | [model-plugin](develop-plugins/model-plugin/) ✅ | [bundle.md](develop-plugins/bundle.md) ✅ | [agent-strategy-plugin.md](develop-plugins/agent-strategy-plugin.md) ✅ | [schema-definition](../schema-definition/) ✅ + * [安装与使用插件](plugins/quick-start/install-plugins.md) ✅ | [发布插件:GitHub](../publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ | [打包插件](../publish-plugins/package-plugin-file-and-publish.md) ✅ | [develop-plugins](develop-plugins/) ✅ + * [插件开发](plugins/quick-start/develop-plugins/README.md) ✅ | [initialize-development-tools.md](initialize-development-tools.md) ✅ | [tool-plugin.md](tool-plugin.md) ✅ | [model-plugin](model-plugin/) ✅ | [extension-plugin.md](extension-plugin.md) ✅ | [通用结构标准定义](../../schema-definition/general-specifications.md) ✅ | [Manifest 标准定义](../../schema-definition/manifest.md) ✅ | [工具接入标准定义](../../schema-definition/tool.md) ✅ | [模型接入简介](../../schema-definition/model/) ✅ | [Endpoint 标准定义](../../schema-definition/endpoint.md) ✅ | [扩展 Agent 策略](../../schema-definition/agent.md) ✅ | [app.md](../../schema-definition/reverse-invocation-of-the-dify-service/app.md "mention") ❌ | [model.md](../../schema-definition/reverse-invocation-of-the-dify-service/model.md "mention") ❌ | [node.md](../../schema-definition/reverse-invocation-of-the-dify-service/node.md "mention") ❌ | [tool.md](../../schema-definition/reverse-invocation-of-the-dify-service/tool.md "mention") ❌ | [插件持久化存储能力](../../schema-definition/persistent-storage.md) ✅ | [Marketplace 发布指南](../../publish-plugins/publish-to-dify-marketplace/) ✅ | [GitHub 发布指南](../../publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ + * [初始化开发工具](plugins/quick-start/develop-plugins/initialize-development-tools.md) ✅ | [Dify Plugin CLI](https://github.com/langgenius/dify-plugin-daemon/releases) 
✅ | [Python 安装教程](https://pythontest.com/python/installing-python-3-11/) ✅ | [tool-plugin.md](tool-plugin.md) ✅ | [model-plugin](model-plugin/) ✅ | [agent-strategy-plugin.md](agent-strategy-plugin.md) ✅ | [extension-plugin.md](extension-plugin.md) ✅ | [bundle.md](bundle.md) ✅ + * [Tool 插件](plugins/quick-start/develop-plugins/tool-plugin.md) ✅ | [初始化开发工具](initialize-development-tools.md) ✅ | [接口文档](../../schema-definition/) ✅ | [ProviderConfig](../../schema-definition/general-specifications.md#providerconfig) ✅ | [工具接口文档](../../schema-definition/tool.md) ✅ | [“插件管理”](https://cloud.dify.ai/plugins) ❌ | [插件发布规范](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [publish-plugins](../../publish-plugins/) ✅ | [开发 Extension 插件](extension-plugin.md) ✅ | [开发 Model 插件](model-plugin/) ✅ | [Bundle 插件:将多个插件打包](bundle.md) ✅ | [Manifest](../../schema-definition/manifest.md) ✅ | [Endpoint](../../schema-definition/endpoint.md) ✅ | [反向调用 Dify 能力](../../schema-definition/reverse-invocation-of-the-dify-service/) ✅ | [工具](../../schema-definition/tool.md) ✅ | [模型](../../schema-definition/model/) ✅ | [扩展 Agent 策略](../../schema-definition/agent.md) ✅ + * [Model 插件](plugins/quick-start/develop-plugins/model-plugin/README.md) ✅ | [创建模型供应商](create-model-providers.md) ✅ | [预定义](../../../../guides/model-configuration/predefined-model.md) ✅ | [自定义](customizable-model.md) ✅ | [调试插件](../../debug-plugins.md) ❌ + * [创建模型供应商](plugins/quick-start/develop-plugins/model-plugin/create-model-providers.md) ✅ | [初始化开发工具](../initialize-development-tools.md) ✅ | [模型接口文档](../../../schema-definition/model/model-schema.md) ✅ | [接入预定义模型](../../../../guides/model-configuration/predefined-model.md) ✅ | [接入自定义模型](../../../../guides/model-configuration/customizable-model.md) ✅ + * [接入预定义模型](plugins/quick-start/develop-plugins/model-plugin/integrate-the-predefined-model.md) ✅ | [模型供应商](create-model-providers.md) ✅ | 
[AIModelEntity](../../../schema-definition/model/model-designing-rules.md#aimodelentity) ✅ | [GitHub 代码仓库](https://github.com/langgenius/dify-official-plugins/tree/main/models/anthropic/models/llm) ✅ | [模型设计规则](../../../schema-definition/model/model-designing-rules.md) ✅ | [Github 代码仓库](https://github.com/langgenius/dify-official-plugins/tree/main/models) ✅ | [GitHub 代码仓库](https://github.com/langgenius/dify-official-plugins/blob/main/models/anthropic/models/llm/llm.py) ✅ | [AIModelEntity](../../../schema-definition/model/model-designing-rules.md#aimodelentity) ✅ | [Dify Plugins 代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [插件发布规范](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [开发 Extension 类型插件](../extension-plugin.md) ✅ | [开发 Model 类型插件](./) ✅ | [Bundle 类型插件:将多个插件打包](../bundle.md) ✅ | [Manifest](../../../schema-definition/manifest.md) ✅ | [Endpoint](../../../schema-definition/endpoint.md) ✅ | [反向调用 Dify 能力](../../../schema-definition/reverse-invocation-of-the-dify-service/) ✅ | [工具](../../../schema-definition/tool.md) ✅ | [模型](../../../schema-definition/model/) ✅ + * [接入自定义模型](plugins/quick-start/develop-plugins/model-plugin/customizable-model.md) ✅ | [Xinference 模型](https://inference.readthedocs.io/en/latest/) ✅ | [预定义模型类型](integrate-the-predefined-model.md) ✅ | [接口文档:Model](../../../schema-definition/model/) ✅ | [GitHub 代码仓库](https://github.com/langgenius/dify-official-plugins/tree/main/models/xinference) ✅ | [debug-plugins.md](../../debug-plugins.md) ❌ | [publish-to-dify-marketplace](../../../publish-plugins/publish-to-dify-marketplace/) ✅ | [开发 Extension 插件](../extension-plugin.md) ✅ | [开发 Tool 插件](../tool-plugin.md) ✅ | [Bundle 插件:将多个插件打包](../bundle.md) ✅ | [Manifest](../../../schema-definition/manifest.md) ✅ | [Endpoint](../../../schema-definition/endpoint.md) ✅ | [反向调用 Dify 能力](../../../schema-definition/reverse-invocation-of-the-dify-service/) ✅ | 
[工具](../../../schema-definition/tool.md) ✅ | [模型](../../../schema-definition/model/) ✅ + * [Agent 策略插件](plugins/quick-start/develop-plugins/agent-strategy-plugin.md) ✅ | [初始化开发工具](initialize-development-tools.md) ✅ | [示例代码](agent-strategy-plugin.md#diao-yong-gong-ju-1) ✅ | [示例代码](agent-strategy-plugin.md#diao-yong-gong-ju-1) ✅ | [“插件管理”](https://console-plugin.dify.dev/plugins) ❌ | [Dify Plugins 代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [插件发布规范](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [完整实现代码](https://github.com/langgenius/dify-official-plugins/blob/main/agent-strategies/cot_agent/strategies/function_calling.py) ✅ + * [Extension 插件](plugins/quick-start/develop-plugins/extension-plugin.md) ✅ | [初始化开发工具](initialize-development-tools.md) ✅ | [接口文档](../../schema-definition/) ✅ | [Dify Plugins 代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [插件发布规范](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [Tool 插件:Google Search](tool-plugin.md) ✅ | [Model 插件](model-plugin/) ✅ | [Bundle 插件:将多个插件打包](bundle.md) ✅ | [Manifest](../../schema-definition/manifest.md) ✅ | [Endpoint](../../schema-definition/endpoint.md) ✅ | [反向调用 Dify 能力](../../schema-definition/reverse-invocation-of-the-dify-service/) ✅ | [工具](../../schema-definition/tool.md) ✅ | [模型](../../schema-definition/model/) ✅ | [扩展 Agent 策略](../../schema-definition/agent.md) ✅ | [开发 Slack Bot 插件](../../best-practice/develop-a-slack-bot-plugin.md) ✅ + * [Bundle 插件包](plugins/quick-start/develop-plugins/bundle.md) ✅ | [初始化开发工具](initialize-development-tools.md) ✅ + * [插件调试](plugins/quick-start/debug-plugin.md) ✅ | [“插件管理”](https://cloud.dify.ai/plugins) ❌ +* [插件管理](plugins/manage-plugins.md) ✅ +* [接口定义](plugins/schema-definition/README.md) ✅ | [manifest.md](manifest.md) ✅ | [endpoint.md](endpoint.md) ✅ | 
[model.md](reverse-invocation-of-the-dify-service/model.md) ✅ | [general-specifications.md](general-specifications.md) ✅ | [persistent-storage.md](persistent-storage.md) ✅ | [reverse-invocation-of-the-dify-service](reverse-invocation-of-the-dify-service/) ✅ + * [Manifest](plugins/schema-definition/manifest.md) ✅ | [GitHub 代码仓库](https://github.com/langgenius/dify-official-plugins/blob/main/tools/google/manifest.yaml) ✅ | [工具](tool.md) ✅ | [模型](model/) ✅ | [Endpoints](endpoint.md) ✅ | [插件隐私政策准则](../publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md) ✅ + * [Endpoint](plugins/schema-definition/endpoint.md) ✅ | [彩虹猫](../quick-start/develop-plugins/extension-plugin.md) ✅ | [Github 仓库](https://github.com/langgenius/dify-plugin-sdks/tree/main/python/examples/neko) ✅ | [ProviderConfig](general-specifications.md#providerconfig) ✅ + * [Tool](plugins/schema-definition/tool.md) ✅ | [快速开始开发插件:工具](../quick-start/develop-plugins/tool-plugin.md) ✅ | [`json_schema`](https://json-schema.org/) ✅ + * [Agent](plugins/schema-definition/agent.md) ✅ | [Sementic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/overview/) ✅ | [Manifest](/zh_CN/plugins/schema-definition/manifest) ❌ | [`Tool` 标准格式](tool.md) ✅ + * [Model](plugins/schema-definition/model/README.md) ✅ | [model-designing-rules.md](model-designing-rules.md) ✅ | [model-schema.md](model-schema.md) ✅ + * [模型设计规则](plugins/schema-definition/model/model-designing-rules.md) ✅ | [Provider](model-designing-rules.md#provider) ✅ | [AIModelEntity](model-designing-rules.md#aimodelentity) ✅ | [[ModelType](model-designing-rules.md#modeltype) ✅ | [[ConfigurateMethod](model-designing-rules.md#configuratemethod) ✅ | [[ProviderCredentialSchema](model-designing-rules.md#providercredentialschema) ✅ | [[ModelCredentialSchema](model-designing-rules.md#modelcredentialschema) ✅ | [[ParameterRule](model-designing-rules.md#parameterrule) ✅ | [[PriceConfig](model-designing-rules.md#priceconfig) ✅ | 
[[CredentialFormSchema](model-designing-rules.md#credentialformschema) ✅ | [[CredentialFormSchema](model-designing-rules.md#credentialformschema) ✅ | [[FormOption](model-designing-rules.md#formoption) ✅ | [[FormShowOnObject](model-designing-rules.md#formshowonobject) ✅ | [[FormShowOnObject](model-designing-rules.md#formshowonobject) ✅ + * [模型接口](plugins/schema-definition/model/model-schema.md) ✅ | [[PromptMessage](model-schema.md#promptmessage) ✅ | [UserPromptMessage](model-schema.md#userpromptmessage) ✅ | [SystemPromptMessage](model-schema.md#systempromptmessage) ✅ | [UserPromptMessage](model-schema.md#userpromptmessage) ✅ | [AssistantPromptMessage](model-schema.md#assistantpromptmessage) ✅ | [ToolPromptMessage](model-schema.md#toolpromptmessage) ✅ | [[PromptMessageTool](model-schema.md#promptmessagecontent) ✅ | [[LLMResultChunk](model-schema.md#llmresultchunk) ✅ | [LLMResult](model-schema.md#llmresult) ✅ | [[LLMResultChunk](model-schema.md#llmresultchunk) ✅ | [LLMResult](model-schema.md#llmresult) ✅ | [OpenAI](https://github.com/langgenius/dify-official-plugins/tree/main/models/openai) ✅ | [TextEmbeddingResult](model-schema.md#textembeddingresult) ✅ | [RerankResult](model-schema.md#rerankresult) ✅ + * [通用规范定义](plugins/schema-definition/general-specifications.md) ✅ | [IETF BCP 47](https://tools.ietf.org/html/bcp47) ✅ | [I18nObject](general-specifications.md#i18nobject) ✅ | [IETF BCP 47](https://tools.ietf.org/html/bcp47) ✅ | [provider\_config\_type](general-specifications.md#providerconfigtype-string) ✅ | [provider\_config\_scope](general-specifications.md#providerconfigscope-string) ✅ | [[provider\_config\_option](general-specifications.md#providerconfigoption-object) ✅ | [IETF BCP 47](https://tools.ietf.org/html/bcp47) ✅ | [IETF BCP 47](https://tools.ietf.org/html/bcp47) ✅ | [IETF BCP 47](https://tools.ietf.org/html/bcp47) ✅ + * [持久化存储](plugins/schema-definition/persistent-storage.md) ✅ + * [反向调用 Dify 
服务](plugins/schema-definition/reverse-invocation-of-the-dify-service/README.md) ✅ | [App](app.md) ✅ | [Model](model.md) ✅ | [Tool](tool.md) ✅ | [Node](node.md) ✅ + * [App](plugins/schema-definition/reverse-invocation-of-the-dify-service/app.md) ✅ + * [Model](plugins/schema-definition/reverse-invocation-of-the-dify-service/model.md) ✅ | [通用规范定义](../general-specifications.md) ✅ + * [Tool](plugins/schema-definition/reverse-invocation-of-the-dify-service/tool.md) ✅ | [此文档](tool.md#diao-yong-workflow-as-tool) ✅ + * [Node](plugins/schema-definition/reverse-invocation-of-the-dify-service/node.md) ✅ | [文档](../general-specifications.md#noderesponse) ✅ +* [最佳实践](plugins/best-practice/README.md) ✅ | [develop-a-slack-bot-plugin.md](develop-a-slack-bot-plugin.md) ✅ + * [开发 Slack Bot 插件](plugins/best-practice/develop-a-slack-bot-plugin.md) ✅ | [初始化开发工具](../initialize-development-tools.md) ❌ | [Python 安装教程](https://pythontest.com/python/installing-python-3-11/) ✅ | [Slack API](https://api.slack.com/apps) ✅ | [快速开始:开发 Extension 插件](../extension-plugin.md) ❌ | [反向调用:App](../../../schema-definition/reverse-invocation-of-the-dify-service/app.md) ❌ | [Dify Marketplace 仓库](https://github.com/langgenius/dify-plugins) ✅ | [插件发布规范](https://docs.dify.ai/zh-hans/plugins/publish-plugins/publish-to-dify-marketplace) ✅ | [Github 代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [开发 Extension 插件](../extension-plugin.md) ❌ | [开发 Model 插件](../model-plugin/) ❌ | [Bundle 类型插件:将多个插件打包](../bundle.md) ❌ | [Manifest](../../../schema-definition/manifest.md) ❌ | [Endpoint](../../../schema-definition/endpoint.md) ❌ | [反向调用 Dify 能力](../../../schema-definition/reverse-invocation-of-the-dify-service/) ❌ | [工具](../../../schema-definition/tool.md) ❌ | [模型](../../../schema-definition/model/) ❌ +* [发布插件](plugins/publish-plugins/README.md) ✅ | [代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [publish-to-dify-marketplace](publish-to-dify-marketplace/) ✅ | 
[publish-plugin-on-personal-github-repo.md](publish-plugin-on-personal-github-repo.md) ✅ | [package-plugin-file-and-publish.md](package-plugin-file-and-publish.md) ✅ + * [发布至 Dify Marketplace](plugins/publish-plugins/publish-to-dify-marketplace/README.md) ✅ | [GitHub 代码仓库](https://github.com/langgenius/dify-plugins) ✅ | [插件开发者准则](plugin-developer-guidelines.md) ✅ | [插件隐私政策准则](plugin-privacy-protection-guidelines.md) ✅ | [Manifest 文件](../../schema-definition/manifest.md) ✅ | [Dify Plugins](https://github.com/langgenius/dify-plugins) ✅ | [Dify Marketplace](https://marketplace.dify.ai/) ✅ | [插件开发者准则](plugin-developer-guidelines.md) ✅ | [Manifest 文件](../../schema-definition/manifest.md) ✅ | [插件开发说明](../../quick-start/develop-plugins/) ✅ | [Dify.AI](https://dify.ai/) ✅ + * [插件开发者准则](plugins/publish-plugins/publish-to-dify-marketplace/plugin-developer-guidelines.md) ✅ | [插件隐私政策准则](plugin-privacy-protection-guidelines.md) ✅ + * [插件隐私政策准则](plugins/publish-plugins/publish-to-dify-marketplace/plugin-privacy-protection-guidelines.md) ✅ | [Slack 的隐私政策](https://slack.com/trust/privacy/privacy-policy) ✅ | [Manifest](../../schema-definition/manifest.md) ✅ + * [发布至个人 GitHub 仓库](plugins/publish-plugins/publish-plugin-on-personal-github-repo.md) ✅ | [GitHub 文档](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository) ✅ | [打包插件](broken-reference) ❌ + * [本地发布与分享](plugins/publish-plugins/package-plugin-file-and-publish.md) ✅ | [初始化开发工具](../quick-start/develop-plugins/initialize-development-tools.md) ✅ | [远程连接测试](../quick-start/develop-plugins/extension-plugin.md#tiao-shi-cha-jian) ✅ +* [常见问题](plugins/faq.md) ✅ + +## 研发 + +* [后端](development/backend/README.md) ✅ + * [DifySandbox](development/backend/sandbox/README.md) ✅ | [DifySandbox](https://github.com/langgenius/dify-sandbox) ✅ | [贡献指南](contribution.md) ✅ + * [贡献指南](development/backend/sandbox/contribution.md) ✅ +* [模型接入](development/models-integration/README.md) ✅ + * [接入 Hugging Face 
上的开源模型](development/models-integration/hugging-face.md) ✅ | [text-generation](https://huggingface.co/models?pipeline\_tag=text-generation\&sort=trending) ✅ | [text2text-generation](https://huggingface.co/models?pipeline\_tag=text2text-generation\&sort=trending) ✅ | [feature-extraction](https://huggingface.co/models?pipeline\_tag=feature-extraction\&sort=trending) ✅ | [注册地址](https://huggingface.co/join) ❌ | [获取地址](https://huggingface.co/settings/tokens) ✅ | [Hugging Face 模型列表页](https://huggingface.co/models) ✅ | [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/guides/access) ✅ | [用户名](https://huggingface.co/settings/account) ✅ | [组织名称](https://ui.endpoints.huggingface.co/) ✅ + * [接入 Replicate 上的开源模型](development/models-integration/replicate.md) ✅ | [Language models](https://replicate.com/collections/language-models) ✅ | [Embedding models](https://replicate.com/collections/embedding-models) ✅ | [注册地址](https://replicate.com/signin?next=/docs) ✅ | [获取地址](https://replicate.com/account/api-tokens) ✅ | [Language models](https://replicate.com/collections/language-models) ✅ | [Embedding models](https://replicate.com/collections/embedding-models) ✅ + * [接入 Xinference 部署的本地模型](development/models-integration/xinference.md) ✅ | [Xorbits inference](https://github.com/xorbitsai/inference) ✅ | [本地部署](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E6%9C%AC%E5%9C%B0%E9%83%A8%E7%BD%B2) ❌ | [分布式部署](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E5%88%86%E5%B8%83%E5%BC%8F%E9%83%A8%E7%BD%B2) ❌ | [Xinference 内置模型](https://inference.readthedocs.io/en/latest/models/builtin/index.html) ✅ | [Xinference embed 模型](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md#%E5%86%85%E7%BD%AE%E6%A8%A1%E5%9E%8B) ❌ | [Xorbits Inference](https://github.com/xorbitsai/inference/blob/main/README\_zh\_CN.md) ❌ + * [接入 OpenLLM 部署的本地模型](development/models-integration/openllm.md) ✅ | [OpenLLM](https://github.com/bentoml/OpenLLM) ✅ | 
[支持的模型列表](https://github.com/bentoml/OpenLLM#-supported-models) ✅ | [OpenLLM](https://github.com/bentoml/OpenLLM) ✅ + * [接入 LocalAI 部署的本地模型](development/models-integration/localai.md) ✅ | [LocalAI](https://github.com/go-skynet/LocalAI) ✅ | [Getting Started](https://localai.io/basics/getting_started/) ✅ | [LocalAI Data query example](https://github.com/go-skynet/LocalAI/blob/master/examples/langchain-chroma/README.md) ❌ + * [接入 Ollama 部署的本地模型](development/models-integration/ollama.md) ✅ | [Ollama](https://github.com/jmorganca/ollama) ✅ | [Ollama 下载页](https://ollama.com/download) ✅ | [Ollama Models](https://ollama.com/library) ✅ | [常见问题](#如何在我的网络上暴露-ollama) ✅ | [Ollama](https://github.com/jmorganca/ollama) ✅ | [Ollama FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md) ✅ + * [接入 LiteLLM 代理的模型](development/models-integration/litellm.md) ✅ | [点击这里](https://example.com) ✅ | [LiteLLM](https://example.com) ✅ | [LiteLLM Proxy Server](https://example.com) ✅ + * [接入 GPUStack 进行本地模型部署](development/models-integration/gpustack.md) ✅ | [GPUStack](https://github.com/gpustack/gpustack) ✅ | [文档](https://docs.gpustack.ai) ✅ | [Github 仓库](https://github.com/gpustack/gpustack) ✅ + * [接入 AWS Bedrock 上的模型(DeepSeek)](development/models-integration/aws-bedrock-deepseek.md) ✅ | [AWS Bedrock Marketplace](https://aws.amazon.com/bedrock/marketplace/) ✅ | [Bedrock](https://aws.amazon.com/bedrock/) ✅ | [Dify.AI 账号](https://cloud.dify.ai/) ❌ +* [迁移](development/migration/README.md) ✅ + * [将社区版迁移至 v1.0.0](development/migration/migrate-to-v1.md) ✅ | [v1.0.0](https://github.com/langgenius/dify/releases/tag/1.0.0) ✅ | [Dify 项目](https://github.com/langgenius/dify) ✅ | [文档](https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ + +## 阅读更多 + +* [应用案例](learn-more/use-cases/README.md) ✅ + * [DeepSeek 与 Dify 集成指南:打造具备多轮思考的 AI 应用](learn-more/use-cases/integrate-deepseek-to-build-an-ai-app.md) ✅ | [DeepSeek](https://www.deepseek.com/) ✅ | [**本地私有化部署 DeepSeek + 
Dify**](broken-reference) ❌ | [DeepSeek API 开放平台](https://platform.deepseek.com/) ✅ | [本地部署指南](broken-reference) ❌ | [RAG(检索增强生成)](https://docs.dify.ai/zh-hans/learn-more/extended-reading/retrieval-augment) ✅ | [创建知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents) ✅ | [工作流](https://docs.dify.ai/zh-hans/guides/workflow) ✅ | [文件上传](https://docs.dify.ai/zh-hans/guides/workflow/file-upload) ✅ | [本地部署 DeepSeek + Dify,构建你的专属私有 AI 助手](broken-reference) ❌ + * [本地私有化部署 DeepSeek + Dify,构建你的专属私人 AI 助手](learn-more/use-cases/private-ai-ollama-deepseek-dify.md) ✅ | [Docker](https://www.docker.com/) ✅ | [Ollama](https://ollama.com) ✅ | [Dify 社区版](https://github.com/langgenius/dify) ✅ | [Ollama](https://ollama.com/) ✅ | [Ollama 官网](https://ollama.com/) ✅ | [Docker Compose 部署](https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [常见问题](https://docs.dify.ai/zh-hans/learn-more/use-cases/private-ai-ollama-deepseek-dify#id-1.-docker-bu-shu-shi-de-lian-jie-cuo-wu) ✅ | [DeepSeek 模型说明](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) ✅ | [工作流文档](https://docs.dify.ai/zh-hans/guides/workflow) ✅ | [负载均衡](https://docs.dify.ai/zh-hans/guides/model-configuration/load-balancing) ✅ | [异常处理](https://docs.dify.ai/zh-hans/guides/workflow/error-handling) ✅ + * [如何训练出专属于“你”的问答机器人?](learn-more/use-cases/train-a-qa-chatbot-that-belongs-to-you.md) ✅ | [《如何用狗屁通(GPT )解决一个套娃问题》](http://mp.weixin.qq.com/s?__biz=MzU2Njg1NDA3Mw==\&mid=2247484248\&idx=1\&sn=50809b40f520c767483e1a7b0eefb9c1\&chksm=fca76b8ecbd0e298e627140d63e7b3383d226ab293a2e8fefa04b5a1ee12f187520560ec1579\&scene=21#wechat_redirect) ✅ | [此地址](https://udify.app/chat/F2Y4bKEWbuCb1FTC) ✅ + * [教你十几分钟不用代码创建 Midjourney 提示词机器人](learn-more/use-cases/create-a-midjoureny-prompt-word-robot-with-zero-code.md) ✅ + * [构建一个 Notion AI 助手](learn-more/use-cases/build-an-notion-ai-assistant.md) ✅ | [​](https://wsyfin.com/notion-dify#1-login-to-dify) ✅ | 
[项目](https://github.com/langgenius/dify) ✅ | [​](https://wsyfin.com/notion-dify#5-create-your-own-ai-application) ✅ + * [如何在几分钟内创建一个带有业务数据的官网 AI 智能客服](learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md) ✅ + * [使用全套开源工具构建 LLM 应用实战:在 Dify 调用 Baichuan 开源模型能力](learn-more/use-cases/practical-implementation-of-building-llm-applications-using-a-full-set-of-open-source-tools.md) ✅ | [Hugging Face](https://huggingface.co/) ✅ | [LocalAI](https://github.com/go-skynet/LocalAI) ✅ | [openLLM](https://github.com/bentoml/OpenLLM) ✅ | [Dify.AI](https://github.com/langgenius/dify) ✅ | [OpenLLM](https://github.com/bentoml/OpenLLM) ✅ | [Xorbits inference](https://github.com/xorbitsai/inference) ✅ | [官网文档](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html) ✅ | [官网](https://developer.nvidia.com/cuda-downloads?target_os=Windows\&target_arch=x86_64\&target_version=11\&target_type=exe_local) ✅ | [微软官方指引](https://learn.microsoft.com/en-us/windows/wsl/install) ✅ | [官方文档](https://docs.docker.com/desktop/install/windows-install/#wsl-2-backend) ✅ | [博客](https://www.cnblogs.com/tuilk/p/16287472.html) ❌ | [官网](https://pytorch.org/) ✅ | [部署文档](https://docs.dify.ai/v/zh-hans/advanced/model-configuration/xinference) ✅ | [Xorbits inference](https://github.com/xorbitsai/inference) ✅ | [http://localhost:9997](http://localhost:9997) ❌ | [Xinference 内置模型](https://inference.readthedocs.io/en/latest/models/builtin/index.html) ✅ | [部署文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ + * [手把手教你把 Dify 接入微信生态](learn-more/use-cases/dify-on-wechat.md) ✅ | [Dify on WeChat](https://github.com/hanfangyuan4396/dify-on-wechat) ✅ | [欢迎使用 Dify | 中文 | Dify](https://docs.dify.ai/v/zh-hans) ✅ | [Dify官方应用平台](https://cloud.dify.ai/signin) ✅ | [Docker Compose 部署 | 中文 | Dify](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [Dify on WeChat](https://github.com/hanfangyuan4396/dify-on-wechat) ✅ | [ 
ChatGPT on WeChat](https://github.com/zhayujie/chatgpt-on-wechat) ✅ | [Dify](https://github.com/langgenius/dify) ✅ | [Dify on WeChat](https://github.com/hanfangyuan4396/dify-on-wechat) ✅ | [python官网](https://www.python.org/downloads/) ✅ | [dify文档仓库](../../guides/workflow/) ✅ | [点击此处下载](https://github.com/hanfangyuan4396/dify-on-wechat/blob/master/dsl/chat-workflow.yml) ✅ | [Dify on WeChat](https://github.com/hanfangyuan4396/dify-on-wechat) ✅ | [issue1525](https://github.com/zhayujie/chatgpt-on-wechat/issues/1525) ✅ | [官方下载链接](https://dldir1.qq.com/wework/work\_weixin/WeCom\_4.0.8.6027.exe) ❌ | [备用下载链接](https://www.alipan.com/s/UxQHrZ5WoxS) ✅ | [ntwork-whl](https://github.com/hanfangyuan4396/ntwork-bin-backup/tree/main/ntwork-whl) ✅ | [Dify on WeChat](https://github.com/hanfangyuan4396/dify-on-wechat) ✅ + * [使用 Dify 和 Twilio 构建 WhatsApp 机器人](learn-more/use-cases/dify-on-whatsapp.md) ✅ | [Microsoft 最有價值專家 (MVP)](https://mvp.microsoft.com/en-US/mvp/profile/476f41d3-6bd1-ea11-a812-000d3a8dfe0d) ✅ | [這裡](https://www.twilio.com/try-twilio) ✅ | [手把手教你把 Dify 接入微信生态](dify-on-wechat.md) ✅ | [Dify官方应用平台](https://cloud.dify.ai/signin) ✅ | [Docker Compose 部署 | 中文 | Dify](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [Twilio控制台](https://console.twilio.com/) ✅ + * [将 Dify 应用与钉钉机器人集成](learn-more/use-cases/dify-on-dingtalk.md) ✅ | [Dify-on-Dingtalk](https://github.com/zfanswer/dify-on-dingtalk) ✅ | [Dify-on-Dingtalk](https://github.com/zfanswer/dify-on-dingtalk) ✅ | [Dify 官方文档](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/creating-an-application) ✅ | [钉钉开发平台](https://open-dev.dingtalk.com/) ✅ | [卡片平台](https://open-dev.dingtalk.com/fe/card) ✅ | [README.md](https://github.com/zfanswer/dify-on-dingtalk/blob/main/README.md#env%25) ✅ | [该项目](https://github.com/zfanswer/dify-on-dingtalk) ✅ + * [使用 Dify 和 Azure Bot Framework 构建 Microsoft Teams 机器人](learn-more/use-cases/dify-on-teams.md) ✅ | [Microsoft 最有价值专家 
(MVP)](https://mvp.microsoft.com/en-US/mvp/profile/476f41d3-6bd1-ea11-a812-000d3a8dfe0d) ✅ | [Azure 账户](https://azure.microsoft.com/en-us/free) ❌ | [Dify 平台](https://cloud.dify.ai/signin) ✅ | [Docker Compose 部署](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/docker-compose) ✅ | [Azure Marketplace](https://portal.azure.com/#view/Microsoft\_Azure\_Marketplace/GalleryItemDetailsBladeNopdl/id/Microsoft.AzureBot/selectionMode\~/false/resourceGroupId//resourceGroupLocation//dontDiscardJourney\~/false/selectedMenuId/home/launchingContext\~/%7B%22galleryItemId%22%3A%22Microsoft.AzureBot%22%2C%22source%22%3A%5B%22GalleryFeaturedMenuItemPart%22%2C%22VirtualizedTileDetails%22%5D%2C%22menuItemId%22%3A%22home%22%2C%22subMenuItemId%22%3A%22Search%20results%22%2C%22telemetryId%22%3A%22a09b3b54-129b-475f-bd39-d7285a272043%22%7D/searchTelemetryId/258b225f-e7d5-4744-bfe4-69fa701d9d5a) ✅ + * [如何让 LLM 应用提供循序渐进的聊天体验?](learn-more/use-cases/how-to-make-llm-app-provide-a-progressive-chat-experience.md) ✅ | [《工作流》](https://docs.dify.ai/v/zh-hans/guides/workflow) ✅ + * [如何将 Dify Chatbot 集成至 Wix 网站?](learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md) ✅ | [Dify AI 应用](https://docs.dify.ai/v/zh-hans/guides/application-orchestrate/creating-an-application) ✅ + * [如何连接 AWS Bedrock 知识库?](learn-more/use-cases/how-to-connect-aws-bedrock.md) ✅ | [AWS Bedrock](https://aws.amazon.com/bedrock/) ✅ | [API 定义](../../guides/knowledge-base/external-knowledge-api-documentation.md) ✅ | [后续步骤](how-to-connect-aws-bedrock.md#id-5.-lian-jie-wai-bu-zhi-shi-ku) ✅ | [第二步](how-to-connect-aws-bedrock.md#id-2.-gou-jian-hou-duan-api-fu-wu) ✅ | [第二步](how-to-connect-aws-bedrock.md#id-2.-gou-jian-hou-duan-api-fu-wu) ✅ | [第四步](how-to-connect-aws-bedrock.md#id-4.-guan-lian-wai-bu-zhi-shi-api) ✅ | [第三步](how-to-connect-aws-bedrock.md#id-3.-huo-qu-aws-bedrock-knowledge-base-id) ✅ + * [构建 Dify 应用定时任务助手](learn-more/use-cases/dify-schedule.md) ✅ | [Leo\_chen](https://github.com/leochen-g) 
✅ | [Dify Schedule](https://github.com/leochen-g/dify-schedule) ✅ | [智能微秘书](https://github.com/leochen-g/wechat-assistant-pro) ✅ | [Dify Workflow 定时助手代码仓库](https://github.com/leochen-g/dify-schedule) ✅ | [Pushplus](http://www.pushplus.plus/) ✅ | [Server 酱](https://sct.ftqq.com/) ✅ | [智能微秘书](https://wechat.aibotk.com/?r=dBL0Bn\&f=difySchedule) ✅ | [项目地址](https://github.com/whyour/qinglong) ✅ | [项目地址](https://github.com/whyour/qinglong) ✅ | [Dify 定时任务项目](https://github.com/leochen-g/dify-schedule) ✅ + * [如何在 Dify 内体验大模型“竞技场”?以 DeepSeek R1 VS o1 为例](learn-more/use-cases/dify-model-arena.md) ✅ | [“多模型调试”](/zh_CN/guides/application-orchestrate/multiple-llms-debugging.md) ❌ | [“增加新供应商”](https://docs.dify.ai/zh-hans/guides/model-configuration) ✅ | [多模型调试](/zh_CN/guides/application-orchestrate/multiple-llms-debugging.md) ❌ + * [在 Dify 云端构建 AI Thesis Slack Bot](learn-more/use-cases/building-an-ai-thesis-slack-bot.md) ✅ | [Slack 官方网站](https://slack.com/intl/en-gb/get-started?entry_point=help_center#/createnew) ✅ | [Slack API 管理页面](https://api.slack.com/apps) ✅ +* [扩展阅读](learn-more/extended-reading/README.md) ✅ + * [什么是 LLMOps?](learn-more/extended-reading/what-is-llmops.md) ✅ + * [什么是数组变量?](learn-more/extended-reading/what-is-array-variable.md) ✅ | [工作流 - 列表操作](../../guides/workflow/node/list-operator.md) ✅ | [工作流 - 迭代](../../guides/workflow/node/iteration.md) ✅ + * [检索增强生成(RAG)](learn-more/extended-reading/retrieval-augment/README.md) ✅ + * [混合检索](learn-more/extended-reading/retrieval-augment/hybrid-search.md) ✅ + * [重排序](learn-more/extended-reading/retrieval-augment/rerank.md) ✅ | [https://cohere.com/rerank](https://cohere.com/rerank) ✅ | [《多路召回》](https://docs.dify.ai/v/zh-hans/guides/knowledge-base/integrate-knowledge-within-application#duo-lu-zhao-hui-tui-jian) ✅ + * [召回模式](learn-more/extended-reading/retrieval-augment/retrieval.md) ✅ | [重排序](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/rerank) ✅ + * 
[提示词编排](learn-more/extended-reading/prompt-engineering.md) ✅ | [Learn Prompting](https://learnprompting.org/zh-Hans/) ✅ | [ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) ✅ | [Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts) ✅ + * [如何使用 JSON Schema 让 LLM 输出遵循结构化格式的内容?](learn-more/extended-reading/how-to-use-json-schema-in-dify.md) ✅ | [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction) ✅ | [此处](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas) ✅ | [additionalProperties:false](https://platform.openai.com/docs/guides/structured-outputs/additionalproperties-false-must-always-be-set-in-objects) ✅ | [Introduction to Structured Outputs](https://cookbook.openai.com/examples/structured\_outputs\_intro) ❌ | [Structured Output](https://platform.openai.com/docs/guides/structured-outputs/json-mode?context=without\_parse) ✅ +* [常见问题](learn-more/faq/README.md) ✅ | [本地部署相关常见问题](https://docs.dify.ai/v/zh-hans/getting-started/faq/install-faq) ✅ | [LLM 配置与使用相关常见问题](https://docs.dify.ai/v/zh-hans/getting-started/faq/llms-use-faq) ✅ + * [本地部署](learn-more/faq/install-faq.md) ✅ | [环境变量](../../getting-started/install-self-hosted/environments.md) ✅ | [**Notion 的集成配置地址**](https://www.notion.so/my-integrations) ✅ | [环境变量说明文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments) ✅ | [环境变量说明文档](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments) ✅ | [Issue](https://github.com/langgenius/dify/issues/1261) ✅ | [FFmpeg 官方网站](https://ffmpeg.org/download.html) ✅ | [这篇](https://portswigger.net/web-security/ssrf) ✅ | [squid的配置文档](http://www.squid-cache.org/Doc/config/) ✅ | [business@dify.ai](mailto:business@dify.ai) ✅ | [内容安全策略](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CSP) ✅ + * [LLM 配置与使用](learn-more/faq/llms-use-faq.md) ✅ | 
[文档](https://platform.openai.com/docs/models/overview) ✅ | [余弦相似度](https://en.wikipedia.org/wiki/Cosine\_similarity) ❌ | [OpenAI 官方文档说明](https://platform.openai.com/docs/guides/rate-limits) ✅ | [文档](https://docs.dify.ai/v/zh-hans/getting-started/faq/install-faq#11.-ben-di-bu-shu-ban-ru-he-jie-jue-shu-ju-ji-wen-dang-shang-chuan-de-da-xiao-xian-zhi-he-shu-liang) ✅ | [此文档编写技巧](https://docs.dify.ai/v/zh-hans/advanced/datasets) ✅ | [OpenAI 定价文档](https://openai.com/pricing) ✅ | [OpenAI 官方文档](https://platform.openai.com/account/billing/overview) ✅ + * [插件](learn-more/faq/plugins.md) ✅ + +## 政策 + +* [开源许可证](policies/open-source.md) ✅ | [LICENSE](https://github.com/langgenius/dify/blob/main/LICENSE) ✅ | [business@dify.ai](mailto:business@dify.ai) ✅ +* [用户协议](policies/agreement/README.md) ✅ | [服务条款](https://dify.ai/terms) ✅ | [隐私政策](https://dify.ai/privacy) ✅ | [get-compliance-report.md](get-compliance-report.md) ✅ + * [服务条款](https://dify.ai/terms) ✅ + * [隐私政策](https://dify.ai/privacy) ✅ + * [获取合规报告](policies/agreement/get-compliance-report.md) ✅ | [定价页](https://dify.ai/pricing) ✅ diff --git a/zh-hans/user-guide/build-app/agent.mdx b/zh-hans/user-guide/build-app/agent.mdx deleted file mode 100644 index aecb7744..00000000 --- a/zh-hans/user-guide/build-app/agent.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Agent -version: '简体中文' ---- - -### 定义 - -智能助手(Agent Assistant),利用大语言模型的推理能力,能够自主对复杂的人类任务进行目标规划、任务拆解、工具调用、过程迭代,并在没有人类干预的情况下完成任务。 - -### 如何使用智能助手 - -为了方便快速上手使用,您可以在“探索”中找到智能助手的应用模板,添加到自己的工作区,或者在此基础上进行自定义。在全新的 Dify 工作室中,你也可以从零编排一个专属于你自己的智能助手,帮助你完成财务报表分析、撰写报告、Logo 设计、旅程规划等任务。 - -![](/zh-cn/img/40d3342c49dc56350356eade970454c3.png) - -选择智能助手的推理模型,智能助手的任务完成能力取决于模型推理能力,我们建议在使用智能助手时选择推理能力更强的模型系列如 gpt-4 以获得更稳定的任务完成效果。 - -![](/zh-cn/img/6c538b11885e24394906e3b7e5c19b17.png) - -你可以在“提示词”中编写智能助手的指令,为了能够达到更优的预期效果,你可以在指令中明确它的任务目标、工作流程、资源和限制等。 - -![](/zh-cn/img/a04ab941483c2d50633011a6ad5b69fb.png) - -### 添加助手需要的工具 - -在“上下文”中,你可以添加智能助手可以用于查询的知识库工具,这将帮助它获取外部背景知识。 - 
-在“工具”中,你可以添加需要使用的工具。工具可以扩展 LLM 的能力,比如联网搜索、科学计算或绘制图片,赋予并增强了 LLM 连接外部世界的能力。Dify 提供了两种工具类型:**第一方工具**和**自定义工具**。 - -你可以直接使用 Dify 生态提供的第一方内置工具,或者轻松导入自定义的 API 工具(目前支持 OpenAPI / Swagger 和 OpenAI Plugin 规范)。 - -![](/zh-cn/img/e5cfc25900fd8b2ca8e9c88402affd72.png) - -“工具”功能允许用户借助外部能力,在 Dify 上创建出更加强大的 AI 应用。例如你可以为智能助理型应用(Agent)编排合适的工具,它可以通过任务推理、步骤拆解、调用工具完成复杂任务。 - -另外工具也可以方便将你的应用与其他系统或服务连接,与外部环境交互。例如代码执行、对专属信息源的访问等。你只需要在对话框中谈及需要调用的某个工具的名字,即可自动调用该工具。 - -![](/zh-cn/user-guide/.gitbook/assets/zh-agent-dalle3.png) - -### 配置 Agent - -在 Dify 上为智能助手提供了 Function calling(函数调用)和 ReAct 两种推理模式。已支持 Function Call 的模型系列如 gpt-3.5/gpt-4 拥有效果更佳、更稳定的表现,尚未支持 Function calling 的模型系列,我们支持了 ReAct 推理框架实现类似的效果。 - -在 Agent 配置中,你可以修改助手的迭代次数限制。 - -![](/zh-cn/img/85526d52781ea5b45c8e54679af90c03.png) - -![](/zh-cn/img/876dfa7271d17166f160638ca43eeb21.png) - -### 配置对话开场白 - -您可以为智能助手配置一套会话开场白和开场问题,配置的对话开场白将在每次用户初次对话中展示助手可以完成什么样的任务,以及可以提出的问题示例。 - -![](/zh-cn/img/d4adbbfb1c6a6d020e06972713ad7c7c.png) - -### 调试与预览 - -编排完智能助手之后,你可以在发布成应用之前进行调试与预览,查看助手的任务完成效果。 - -![](/zh-cn/img/651df066bc2dc78deffed189b718b8b4.png) - -### 应用发布 - -![](/zh-cn/img/1eb90b9b3d0905e525010ad3376cb626.png) diff --git a/zh-hans/user-guide/build-app/chatbot.mdx b/zh-hans/user-guide/build-app/chatbot.mdx deleted file mode 100644 index c1bfc2ea..00000000 --- a/zh-hans/user-guide/build-app/chatbot.mdx +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: 对话型应用 -version: '简体中文' ---- - -对话型应用采用一问一答模式与用户持续对话。 - -## 适用场景 - -对话型应用可以用在客户服务、在线教育、医疗保健、金融服务等领域。这些应用可以帮助组织提高工作效率、减少人工成本和提供更好的用户体验。 - -## 如何编排 - -对话型应用的编排支持:对话前提示词,变量,上下文,开场白和下一步问题建议。 - -下面以做一个 **面试官** 的应用为例来介绍编排对话型应用。 - -### 创建应用 - -在首页点击 "创建应用" 按钮创建应用。填上应用名称,应用类型选择**聊天助手**。 - -![](/zh-cn/img/5d9711956466aa6b5237d0c3b6e97f81.png) - -### 编排应用 - -创建应用后会自动跳转到应用概览页,你可以在此处为聊天应用设置变量、添加上下文以及额外的聊天功能。 - -![](/zh-cn/img/e008292270ae03d31b83bba99253f3a4.png) - -#### 应用编排 - -**填写提示词** - -提示词用于约束 AI 给出专业的回复,让回应更加精确。你可以借助内置的提示生成器,编写合适的提示词。提示词内支持插入表单变量,例如 `{{input}}`。提示词中的变量的值会替换成用户填写的值。 - -示例: -1. 
输入提示指令,要求给出一段面试场景的提示词。 -2. 右侧内容框将自动生成提示词。 -3. 你可以在提示词内插入自定义变量。 - -![](/zh-cn/img/e3b94de2caed9bfc57362effa1b86de5.png) - -为了更好的用户体验,可以加上对话开场白:`你好,{{name}}。我是你的面试官,Bob。你准备好了吗?`。点击页面底部的 "添加功能" 按钮,打开 "对话开场白" 的功能: - -![](/zh-cn/img/4bf2bc75b3ce6b328efc461682a6ec78.png) - -编辑开场白时,还可以添加数个开场问题: - -![](/zh-cn/img/2698f7edc53d7886007c6c78faba698e.png) - -#### 添加上下文 - -如果想要让 AI 的对话范围局限在知识库内,例如企业内的客服话术规范,可以在"上下文"内引用知识库。 - -![](/zh-cn/img/f99d63b2c6ab6c5bd20d036374cd803c.png) - -### 调试 - -在右侧填写用户输入项,输入内容进行调试。 - -![](/zh-cn/img/87c80411733a228aadbf3ba0a4fdacb1.png) - -如果回答结果不理想,可以调整提示词和底层模型。你也可以使用多个模型同步进行调试,搭配出合适的配置。 - -![](/zh-cn/img/c56f9168146fe003847e5551d41c5023.png) - -**多个模型进行调试:** - -如果使用单一模型调试时感到效率低下,你也可以使用 **"多个模型进行调试"** 功能,批量检视模型的回答效果。 - -![](/zh-cn/img/ce4d5b43f6fc051cb6544c9560fc73dc.png) - -最多支持同时添加 4 个大模型。 - -![](/zh-cn/img/f533501fa1e8d07701825cdf7dd024c7.png) - -⚠️ 使用多模型调试功能时,如果仅看到部分大模型,这是因为暂未添加其它大模型的 Key。你可以在"增加新供应商"内手动添加多个模型的 Key。 - -### 发布应用 - -调试好应用后,点击右上角的 **"发布"** 按钮生成独立的 AI 应用。除了通过公开 URL 体验该应用,你也进行基于 APIs 的二次开发、嵌入至网站内等操作。详情请参考[发布应用](/zh-cn/user-guide/application-publishing/launch-your-webapp-quickly/web-app-settings)。 - -如果想定制已发布的应用,可以 Fork 我们的开源的 WebApp 的模版。基于模版改成符合你的情景与风格需求的应用。 - -## 常见问题 - -**如何在聊天助手内添加第三方工具?** - -聊天助手类型应用不支持添加第三方工具,你可以在 Agent 类型应用内添加第三方工具。 - -**如何在创建聊天助手应用时,使用元数据功能筛选知识库内文档?** - -如需了解如何使用元数据功能筛选文档,请参阅 [在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application) 中的“**使用元数据筛选知识 > 聊天助手**”章节。 diff --git a/zh-hans/user-guide/build-app/flow-app/additional-feature.mdx b/zh-hans/user-guide/build-app/flow-app/additional-feature.mdx deleted file mode 100644 index 694b19c7..00000000 --- a/zh-hans/user-guide/build-app/flow-app/additional-feature.mdx +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: 附加功能 -version: '简体中文' ---- - -Workflow 和 Chatflow 应用均支持开启附加功能以增强使用者的交互体验。例如添加文件上传入口、给 LLM 应用添加一段自我介绍或使用欢迎语,让应用使用者获得更加丰富的交互体验。 - -点击应用右上角的 **"功能"** 按钮即可为应用添加更多功能。 - - - -### Workflow - -> 不再推荐使用该方法为 Workflow 
应用添加文件上传功能。建议应用开发者改用自定义文件变量为 Workflow 应用添加文件上传功能。 - -Workflow 类型应用仅支持 **"图片上传"** 功能。开启后,Workflow 应用的使用页将出现图片上传入口。 - - - -**用法:** - -**对于应用使用者而言:** 已开启图片上传功能的应用的使用页将出现上传按钮,点击按钮或粘贴文件链接即可完成图片上传,你将会收到 LLM 对于图片的回答。 - -**对于应用开发者而言:** 开启文件图片上传功能后,使用者所上传的图片文件将存储在 `sys.files` 变量内。接下来添加 LLM 节点,选中具备视觉能力的大模型并在其中开启 VISION 功能,选择 `sys.files` 变量,使得 LLM 能够读取该图片文件。 - -最后在 END 节点内选择 LLM 节点的输出变量即可完成设置。 - - - LLM节点中开启视觉分析能力的设置界面 - - -### Chatflow - -Chatflow 类型应用支持以下功能: - -* **对话开场白** - - 让 AI 主动发送一段话,可以是欢迎语或 AI 的自我介绍,以拉近与使用者的距离。 -* **下一步问题建议** - - 在对话完成后,自动添加下一步问题建议,以提升对话的话题深度与频率。 -* **文字转语音** - - 在问答文字框中添加一个音频播放按钮,使用 TTS 服务(需在[模型供应商](user-guide/getting-started/readme/model-providers.md)内置)并朗读其中的文字。 -* **文件上传** - - 支持以下文件类型:文档、图片、音频、视频以及其它文件类型。开启此功能后,应用使用者可以在应用对话的过程中随时上传并更新文件。最多支持同时上传 10 个文件,每个文件的大小上限为 15MB。 - - - Chatflow应用中文件上传功能的设置界面 - - -* **引用和归属** - - 常用于配合["知识检索"](node/knowledge-retrieval.md)节点共同使用,显示 LLM 给出答复的参考源文档及归属部分。 -* **内容审查** - - 支持使用审查 API 维护敏感词库,确保 LLM 能够回应和输出安全内容,详细说明请参考[敏感内容审查](../application-orchestrate/app-toolkits/moderation-tool.md)。 - -**用法:** - -除了 **文件上传** 功能以外,Chatflow 应用内的其它功能用法较为简单,开启后可以在应用交互页直观使用。 - -本章节将主要介绍 **文件上传** 功能的具体用法: - -**对于应用使用者而言:** 已开启文件上传功能的 Chatflow 应用将会在对话框右侧出现 "回形针" 标识,点击后即可上传文件并与 LLM 交互。 - - - Chatflow应用中使用文件上传功能的界面 - - -**对于应用开发者而言:** - -开启文件上传功能后,使用者发送的文件将上传至 `sys.files` 变量内。用户在同一场对话发送新的消息后,该变量将更新。 - -根据上传的文件差异,不同类型的文件对应不同的应用编排方式。 - -* **文档文件** - -LLM 并不具备直接读取文档文件的能力,因此需要使用 [文档提取器](node/doc-extractor.md) 节点预处理 `sys.files` 变量内的文件。编排步骤如下: - -1. 开启 Features 功能,并在文件类型中仅勾选 "文档"。 -2. 在[文档提取器](node/doc-extractor.md)节点的输入变量中选中 `sys.files` 变量。 -3. 添加 LLM 节点,在系统提示词中选中文档提取器节点的输出变量。 -4. 在末尾添加 "直接回复" 节点,填写 LLM 节点的输出变量。 - -使用此方法搭建出的 Chatflow 应用无法记忆已上传的文件内容。应用使用者每次对话时都需要在聊天框中上传文档文件。如果你希望应用能够记忆已上传的文件,请参考 [《文件上传:在开始节点添加变量》](file-upload.md#fang-fa-er-zai-tian-jia-wen-jian-bian-liang)。 - - - 处理文档文件的工作流编排示意图 - - -* **图片文件** - -部分 LLM 支持直接获取图片中的信息,因此无需添加额外节点处理图片。 - -编排步骤如下: - -1. 开启 Features 功能,并在文件类型中仅勾选 "图片"。 -2. 
添加 LLM 节点,启 VISION 功能并选择 `sys.files` 变量。 -3. 在末尾添加 "直接回复" 节点,填写 LLM 节点的输出变量。 - - - LLM节点中开启视觉分析能力的设置界面 - - -* **混合文件类型** - -若希望应用具备同时处理文档文件 + 图片文件的能力,需要用到 [列表操作](node/list-operator.md) 节点预处理 `sys.files` 变量内的文件,提取更加精细的变量后发送至对应的处理节点。编排步骤如下: - -1. 开启 Features 功能,并在文件类型中勾选 "图片" + "文档文件" 类型。 -2. 添加两个列表操作节点,在 "过滤" 条件中提取图片与文档变量。 -3. 提取文档文件变量,传递至 "文档提取器" 节点;提取图片文件变量,传递至 "LLM" 节点。 -4. 在末尾添加 "直接回复" 节点,填写 LLM 节点的输出变量。 - -应用使用者同时上传文档文件和图片后,文档文件自动分流至文档提取器节点,图片文件自动分流至 LLM 节点以实现对于文件的共同处理。 - - - 处理混合文件类型的工作流编排示意图 - - -* **音视频文件** - -LLM 尚未支持直接读取音视频文件,Dify 平台也尚未内置相关文件处理工具。应用开发者可以参考 [外部数据工具](../extension/api-based-extension/external-data-tool.md) 接入工具自行处理文件信息。 - diff --git a/zh-hans/user-guide/build-app/flow-app/application-publishing.mdx b/zh-hans/user-guide/build-app/flow-app/application-publishing.mdx deleted file mode 100644 index f4b10356..00000000 --- a/zh-hans/user-guide/build-app/flow-app/application-publishing.mdx +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: 应用发布 ---- - -调试完成之后点击右上角的「发布」可以将该工作流保存并快速发布成为不同类型的应用。 - -![](https://assets-docs.dify.ai/2025/03/6cd7d2105cb5a9e4f25601efbda4ffb0.png) - -对话型应用支持发布为: - -* 直接运行 -* 嵌入网站 -* 访问 API - -工作流应用支持发布为: - -* 直接运行 -* 批处理 -* 访问 API -* 发布为工具 - - -如需管理多个版本的 聊天流/工作流,请参阅 [版本管理](https://docs.dify.ai/zh-hans/guides/management/version-control)。 - diff --git a/zh-hans/user-guide/build-app/flow-app/concepts.mdx b/zh-hans/user-guide/build-app/flow-app/concepts.mdx deleted file mode 100644 index e98680d4..00000000 --- a/zh-hans/user-guide/build-app/flow-app/concepts.mdx +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: 关键概念 -version: '简体中文' ---- - -### 节点 - -**节点是工作流的关键构成**,通过连接不同功能的节点,执行工作流的一系列操作。 - -工作流的核心节点请查看[节点 - 开始](./nodes/start)。 - -*** - -### 变量 - -**变量用于串联工作流内前后节点的输入与输出**,实现流程中的复杂处理逻辑,包含系统变量、环境变量和会话变量。详细说明请参考 [变量](./variables)。 - -*** - -### Chatflow 和 Workflow - -**应用场景** - -* **Chatflow**:面向对话类情景,包括客户服务、语义搜索、以及其他需要在构建响应时进行多步逻辑的对话式应用程序。 -* **Workflow**:面向自动化和批处理情景,适合高质量翻译、数据分析、内容生成、电子邮件自动化等应用程序。 - -**使用入口** - -**可用节点差异** - -1. 
End 节点属于 Workflow 的结束节点,仅可在流程结束时选择。 -2. Answer 节点属于 Chatflow ,用于流式输出文本内容,并支持在流程中间步骤输出。 -3. Chatflow 内置聊天记忆(Memory),用于存储和传递多轮对话的历史消息,可在 LLM 、问题分类等节点内开启,Workflow 无 Memory 相关配置,无法开启。 -4. Chatflow 的开始节点内置变量包括:`sys.query`,`sys.files`,`sys.conversation_id`,`sys.user_id`。Workflow 的开始节点内置变量包括:`sys.files`,`sys.user_id` diff --git a/zh-hans/user-guide/build-app/flow-app/create-flow-app.mdx b/zh-hans/user-guide/build-app/flow-app/create-flow-app.mdx deleted file mode 100644 index db3db99d..00000000 --- a/zh-hans/user-guide/build-app/flow-app/create-flow-app.mdx +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: 创建应用 -version: '简体中文' ---- - -## Chatflow - -**适用场景:** - -面向对话类情景,包括客户服务、语义搜索、以及其他需要在构建响应时进行多步逻辑的对话式应用程序。该类型应用的特点在于支持对生成的结果进行多轮对话交互,调整生成的结果。 - -常见的交互路径:给出指令 → 生成内容 → 就内容进行多次讨论 → 重新生成结果 → 结束。 - -在 "工作室" 页,轻点左侧 “创建空白应用”,然后选择 “聊天助手” 中的 “工作流编排”。 - -![](/zh-cn/img/8eb5a12939c298bc7cb9a778d10d42db.png) - -## Workflow - -**适用场景:** - -面向自动化和批处理情景,适合高质量翻译、数据分析、内容生成、电子邮件自动化等应用程序。该类型应用无法对生成的结果进行多轮对话交互。 - -常见的交互路径:给出指令 → 生成内容 → 结束 - -在 "工作室" 页,轻点左侧 “创建空白应用”,然后选择 “工作流” 完成创建。 - -![](/zh-cn/user-guide/.gitbook/assets/workflow.png) - -## 两者之间的差异 - -**应用类型差异** - -1. End 节点属于 Workflow 的结束节点,仅可在流程结束时选择。 -2. Answer 节点属于 Chatflow ,用于流式输出文本内容,并支持在流程中间步骤输出。 -3. Chatflow 内置聊天记忆(Memory),用于存储和传递多轮对话的历史消息,可在 LLM 、问题分类等节点内开启,Workflow 无 Memory 相关配置,无法开启。 -4. Chatflow 的开始节点内置变量包括:`sys.query`,`sys.files`,`sys.conversation_id`,`sys.user_id`。Workflow 的开始节点内置变量包括:`sys.files`,`sys.user_id` diff --git a/zh-hans/user-guide/build-app/flow-app/file-upload.mdx b/zh-hans/user-guide/build-app/flow-app/file-upload.mdx deleted file mode 100644 index c3512840..00000000 --- a/zh-hans/user-guide/build-app/flow-app/file-upload.mdx +++ /dev/null @@ -1,181 +0,0 @@ ---- -title: 文件上传 -version: '简体中文' ---- - -相较于聊天文本,文档文件能够承载大量的信息,例如学术报告、法律合同。受限于 LLM 自身仅能够支持文件或图片,难以获取文件内更加丰富的上下文信息,应用的使用者不得不手动复制粘贴大量信息与 LLM 对话,增加了许多不必要的使用成本。 - -文件上传功能允许将文件以 File variables 的形式在工作流应用中上传、解析、引用、和下载。**开发者现可轻松构建能理解和处理图片、音频、视频的复杂工作。** - -### 应用场景 - -1. 
**文档分析**: 上传学术研究报告文件,LLM 可以快速总结要点,根据文件内容回答相关问题。 -2. **代码审查**: 开发者上传代码文件,获得优化建议与 bug 检测。 -3. **学习辅导**: 学生上传作业或学习资料,获得个性化的解释和指导。 -4. **法律援助**: 上传完整的合同文本,由 LLM 协助审查条款,指出潜在风险。 - -### 文件上传与知识库的区别 - -文件上传和知识库都是为 LLM 提供额外上下文信息的方式,但它们在使用场景和功能上有明显区别: - -1. **信息来源**: - * 文件上传:允许终端用户在对话过程中动态上传文件,提供即时的、个性化的上下文信息。 - * 知识库:由应用开发者预先设置和管理,包含相对固定的信息集合。 -2. **使用灵活性**: - * 文件上传:更加灵活,用户可以根据具体需求上传不同类型的文件。 - * 知识库:内容相对固定,但可以被多个会话重复利用。 -3. **信息处理**: - * 文件上传:需要通过文档提取器或其他工具将文件内容转换为 LLM 可理解的文本。 - * 知识库:通常已经过预处理和索引,可以直接进行检索。 -4. **应用场景**: - * 文件上传:适用于需要处理用户特定文档的场景,如文档分析、个性化学习辅导等。 - * 知识库:适用于需要访问大量预设信息的场景,如客户服务、产品咨询等。 -5. **数据持久性**: - * 文件上传:通常为临时使用,不会长期存储在系统中。 - * 知识库:作为应用的一部分长期存在,可以持续更新和维护。 - -### 快速开始 - -Dify 支持在 [ChatFlow](key-concept.md#chatflow-he-workflow) 和 [WorkFlow](key-concept.md#chatflow-he-workflow) 类型应用中上传文件,并通过[变量](variables.md)交由 LLM 处理。应用开发者可以参考以下方法为应用开启文件上传功能: - -* 在 Workflow 应用中: - * 在 ["开始节点"](node/start.md) 添加文件变量 -* 在 ChatFlow 应用中: - * 在 ["附加功能"](additional-features.md) 中开启文件上传,允许在聊天窗中直接上传文件 - * 在 ["开始节点"](node/start.md) 添加文件变量 - * 注意:这两种方法可以同时配置,它们是彼此独立的。附加功能中的文件上传设置(包括上传方式和数量限制)不会影响开始节点中的文件变量。例如只想通过开始节点创建文件变量,则无需开启附加功能中的文件上传功能。 - -这两种方法为应用提供了灵活的文件上传选项,以满足不同场景的需求。 - -#### File Types - -file variables 和 array[file] variables 支持以下文件类型与格式: - -| 文件类型 | 支持格式 | -|---------|---------| -| 文档 | TXT, MARKDOWN, PDF, HTML, XLSX, XLS, DOCX, CSV, EML, MSG, PPTX, PPT, XML, EPUB. | -| 图片 | JPG, JPEG, PNG, GIF, WEBP, SVG. | -| 音频 | MP3, M4A, WAV, WEBM, AMR. | -| 视频 | MP4, MOV, MPEG, MPGA. | -| 其他 | 自定义后缀名支持 | - -#### 方法一:在应用聊天框中开启文件上传(仅适用于 Chatflow) - -1. 点击 Chatflow 应用右上角的 **"功能"** 按钮即可为应用添加更多功能。 - - 开启此功能后,应用使用者可以在应用对话的过程中随时上传并更新文件。最多支持同时上传 10 个文件,每个文件的大小上限为 15MB。 - - - Chatflow应用中文件上传功能的设置界面 - - -开启该功能并不意味着赋予 LLM 直接读取文件的能力,还需要配备[**文档提取器**](node/doc-extractor.md)将文档解析为文本供 LLM 理解。 - -* 对于音频文件,可以使用 gpt-4o-audio-preview 等支持多模态输入的模型直接处理音频,无需额外的提取器。 -* 对于视频和其他文件类型,暂无对应的提取器,需要应用开发者接入[外部工具](../tools/advanced-tool-integration.md)进行处理 - -2. 
添加[文档提取器](node/doc-extractor.md)节点,在输入变量中选中 `sys.files` 变量。 -3. 添加 LLM 节点,在系统提示词中选中文档提取器节点的输出变量。 -4. 在末尾添加 "直接回复" 节点,填写 LLM 节点的输出变量。 - - - 包含文件上传的工作流示意图 - - -开启后,用户可以在对话框中上传文件并进行对话。但通过此方式, LLM 应用并不具备记忆文件内容的能力,每次对话时需要上传文件。 - - - 对话框中上传文件的界面 - - -若希望 LLM 能够在对话中记忆文件内容,请参考下文。 - -#### 方法二:通过添加文件变量开启文件上传功能 - -#### 1. 在"开始"节点添加文件变量 - -在应用的["开始"](node/start.md)节点内添加输入字段,选择 **"单文件"** 或 **"文件列表"** 字段类型的变量。 - - - -* **单文件** - - 仅允许应用使用者上传单个文件。 -* **文件列表** - - 允许应用使用者单词批量上传多个文件。 - -> 为了便于操作,将使用单文件变量作为示例。 - -#### 文件解析 - -文件变量的使用方式主要分为两种: - -1. 使用工具节点转换文件内容: - * 对于文档类型的文件,可以使用"文档提取器"节点将文件内容转换为文本形式。 - * 这种方法适用于需要将文件内容解析为模型可理解的格式(如 string、array[string] 等)的情况。 -2. 直接在 LLM 节点中使用文件变量: - * 对于某些特定类型的文件(如图片),可以在 LLM 节点中直接使用文件变量。 - * 例如,对于图片类型的 file variables,可以在 LLM 节点中启用 vision 功能,然后在变量选择器中直接引用对应的文件变量。 - -选择哪种方式取决于文件类型和您的具体需求。接下来,我们将详细介绍这两种方法的具体操作步骤。 - -#### 2. 添加文档提取器节点 - -上传文件后将存储至单文件变量内,LLM 暂不支持直接读取变量中的文件。因此需要先添加 [**"文档提取器"**](node/doc-extractor.md)节点,从已上传的文档文件内提取内容并发送至 LLM 节点完成信息处理。 - -将"开始"节点内的文件变量作为 **"文档提取器"** 节点的输入变量。 - - - 文档提取器节点添加输入变量的界面 - - -将"文档提取器"节点的输出变量填写至 LLM 节点的系统提示词内。 - - - LLM节点中粘贴系统提示词的界面 - - -完成上述设置后,应用的使用者可以在 WebApp 内粘贴文件 URL 或上传本地文件,然后就文档内容与 LLM 展开互动。应用使用者可以在对话过程中随时替换文件,LLM 将获取最新的文件内容。 - - - WebApp中粘贴URL进行对话的界面 - - -**在 LLM 节点中引用文件变量** - -对于某些特定类型的文件(如图片),可以在 LLM 节点中直接使用文件变量。这种方法特别适用于需要视觉分析的场景。以下是具体步骤: - -1. 在 LLM 节点中,启用 vision 功能。这允许模型处理图像输入(模型需要支持 vision)。 -2. 在 LLM 节点的变量选择器中,直接引用之前创建的文件变量如果是通过附加功能开启的文件上传,则选择 `sys.files` 变量。 -3. 
在系统提示词中,指导模型如何处理图像输入。例如,你可以要求模型描述图像内容或回答关于图像的问题。 - -下面是一个示例配置: - - - LLM节点中直接使用文件变量的配置界面 - - -需要注意的是,直接在 LLM 节点中使用文件变量时,我们需要确保文件变量仅包含图片文件,否则可能会导致错误。如果用户可能上传不同类型的文件,我们需要使用列表操作来进行过滤。 - -#### 文件下载 - -将文件变量放置到 answer 节点或者 end 节点中,当应用运行到该节点的时候,将会在会话框中提供文件下载卡片。点击卡片即可下载文件。 - - - 会话框中的文件下载卡片界面 - - -### 进阶使用 - -若希望应用能够支持上传多种文件,例如允许用户同时上传文档文件、图片和音视频文件,此时需要在 "开始节点" 中添加 "文件列表" 变量,并通过"列表操作"节点针对不同的文件类型进行处理。详细说明请参考[列表操作](node/list-operator.md)节点。 - - - 处理多种文件类型的工作流示意图 - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/README.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/README.mdx deleted file mode 100644 index 836abd41..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/README.mdx +++ /dev/null @@ -1,7 +0,0 @@ -# 节点说明 - -**节点是工作流中的关键构成**,通过连接不同功能的节点,执行工作流的一系列操作。 - -### 核心节点 - -
| 节点 | 说明 |
| --- | --- |
| 开始(Start) | 定义一个 workflow 流程启动的初始参数。 |
| 结束(End) | 定义一个 workflow 流程结束的最终输出内容。 |
| 回复(Answer) | 定义一个 Chatflow 流程中的回复内容。 |
| 大语言模型(LLM) | 调用大语言模型回答问题或者对自然语言进行处理。 |
| 知识检索(Knowledge Retrieval) | 从知识库中检索与用户问题相关的文本内容,可作为下游 LLM 节点的上下文。 |
| 问题分类(Question Classifier) | 通过定义分类描述,LLM 能够根据用户输入选择与之相匹配的分类。 |
| 条件分支(IF/ELSE) | 允许你根据 if/else 条件将 workflow 拆分成两个分支。 |
| 代码执行(Code) | 运行 Python / NodeJS 代码以在工作流程中执行数据转换等自定义逻辑。 |
| 模板转换(Template) | 允许借助 Jinja2 的 Python 模板语言灵活地进行数据转换、文本处理等。 |
| 变量聚合(Variable Aggregator) | 将多路分支的变量聚合为一个变量,以实现下游节点统一配置。 |
| 参数提取器(Parameter Extractor) | 利用 LLM 从自然语言推理并提取结构化参数,用于后置的工具调用或 HTTP 请求。 |
| 迭代(Iteration) | 对列表对象执行多次步骤直至输出所有结果。 |
| HTTP 请求(HTTP Request) | 允许通过 HTTP 协议发送服务器请求,适用于获取外部检索结果、webhook、生成图片等情景。 |
| 工具(Tools) | 允许在工作流内调用 Dify 内置工具、自定义工具、子工作流等。 |
| 变量赋值(Variable Assigner) | 变量赋值节点用于向可写入变量(例如会话变量)进行变量赋值。 |
diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/answer.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/answer.mdx deleted file mode 100644 index fe80c088..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/answer.mdx +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: 直接回复 -version: '简体中文' ---- - -### 定义 - -定义一个 Chatflow 流程中的回复内容。 - -你可以在文本编辑器中自由定义回复格式,包括自定义一段固定的文本内容、使用前置步骤中的输出变量作为回复内容、或者将自定义文本与变量组合后回复。 - -可随时加入节点将内容流式输出至对话回复,支持所见即所得配置模式并支持图文混排,如: - -1. 输出 LLM 节点回复内容 -2. 输出生成图片 -3. 输出纯文本 - - -直接回复节点可以不作为最终的输出节点,作为流程过程节点时,可以在中间步骤流式输出结果。 - - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/code.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/code.mdx deleted file mode 100644 index 01dd980a..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/code.mdx +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: 代码 -version: '简体中文' ---- - -## 目录 - -* [介绍](code.md#介绍) -* [使用场景](code.md#使用场景) -* [本地部署](code.md#本地部署) -* [安全策略](code.md#安全策略) - -## 介绍 - -代码节点支持运行 Python / NodeJS 代码以在工作流程中执行数据转换。它可以简化您的工作流程,适用于算术运算、JSON 转换、文本处理等情景。 - -该节点极大地增强了开发人员的灵活性,使他们能够在工作流程中嵌入自定义的 Python 或 Javascript 脚本,并以预设节点无法达到的方式操作变量。通过配置选项,你可以指明所需的输入和输出变量,并撰写相应的执行代码: - -![](/zh-cn/img/904eeba8ee23e89c189497c5fd90f499.png) - -## 配置 - -如果您需要在代码节点中使用其他节点的变量,您需要在`输入变量`中定义变量名,并引用这些变量,可以参考[变量引用](../key-concept.md#变量)。 - -## 使用场景 - -使用代码节点,您可以完成以下常见的操作: - -### 结构化数据处理 - -在工作流中,经常要面对非结构化的数据处理,如 JSON 字符串的解析、提取、转换等。最典型的例子就是 HTTP 节点的数据处理,在常见的 API 返回结构中,数据可能会被嵌套在多层 JSON 对象中,而我们需要提取其中的某些字段。代码节点可以帮助您完成这些操作,下面是一个简单的例子,它从 HTTP 节点返回的 JSON 字符串中提取了`data.name`字段: - -```python -def main(http_response: str) -> dict: - import json - data = json.loads(http_response) - return { - # 注意在输出变量中声明result - 'result': data['data']['name'] - } -``` - -### 数学计算 - -当工作流中需要进行一些复杂的数学计算时,也可以使用代码节点。例如,计算一个复杂的数学公式,或者对数据进行一些统计分析。下面是一个简单的例子,它计算了一个数组的方差: - -```python -def main(x: list) -> dict: - return { - # 注意在输出变量中声明result - 'result' : sum([(i - sum(x) / len(x)) ** 2 for i in x]) / len(x) - } -``` - -### 拼接数据 -
-有时,也许您需要拼接多个数据源,如多个知识检索、数据搜索、API调用等,代码节点可以帮助您将这些数据源整合在一起。下面是一个简单的例子,它将两个知识库的数据合并在一起: - -```python -def main(knowledge1: list, knowledge2: list) -> list: - return { - # 注意在输出变量中声明result - 'result': knowledge1 + knowledge2 - } -``` - -## 本地部署 - -如果您是本地部署的用户,您需要启动一个沙盒服务,它会确保恶意代码不会被执行,同时,启动该服务需要使用Docker服务,您可以在[这里](https://github.com/langgenius/dify/tree/main/docker/docker-compose.middleware.yaml)找到Sandbox服务的具体信息,您也可以直接通过`docker-compose`启动服务: - -```bash -docker-compose -f docker-compose.middleware.yaml up -d -``` - -> 如果您的系统安装了 Docker Compose V2 而不是 V1,请使用 `docker compose` 而不是 `docker-compose`。通过`$ docker compose version`检查这是否为情况。在[这里](https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command)阅读更多信息。 - -## 安全策略 - -无论是 Python3 还是 Javascript 代码执行器,它们的执行环境都被严格隔离(沙箱化),以确保安全性。这意味着开发者不能使用那些消耗大量系统资源或可能引发安全问题的功能,例如直接访问文件系统、进行网络请求或执行操作系统级别的命令。这些限制保证了代码的安全执行,同时避免了对系统资源的过度消耗。 - -### 常见问题 - -**在代码节点内填写代码后为什么无法保存?** - -请检查代码是否包含危险行为。例如: - -```python -def main() -> dict: - return { - "result": open("/etc/passwd").read(), - } -``` - -这段代码包含以下问题: - -* **未经授权的文件访问:** 代码试图读取 "/etc/passwd" 文件,这是 Unix/Linux 系统中存储用户账户信息的关键系统文件。 -* **敏感信息泄露:** "/etc/passwd" 文件包含系统用户的重要信息,如用户名、用户 ID、组 ID、home 目录路径等。直接访问可能会导致信息泄露。 - -危险代码将会被 Cloudflare WAF 自动拦截,你可以通过 “网页调试工具” 中的 “网络” 查看是否被拦截。 - -![](/zh-cn/img/7d29e700be8e5b6dfb91ff1263624368.png) diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/doc-extractor.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/doc-extractor.mdx deleted file mode 100644 index f9429894..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/doc-extractor.mdx +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: 文档提取器 -version: '简体中文' ---- - -### 定义 - -LLM 自身无法直接读取或解释文档的内容。因此需要将用户上传的文档,通过文档提取器节点解析并读取文档文件中的信息,转化文本之后再将内容传给 LLM 以实现对于文件内容的处理。 - -### 应用场景 - -* 构建能够与文件进行互动的 LLM 应用,例如 ChatPDF 或 ChatWord; -* 分析并检查用户上传的文件内容; - -### 节点功能 - -文档提取器节点可以理解为一个信息处理中心,通过识别并读取输入变量中的文件,提取信息后并转化为 string 类型输出变量,供下游节点调用。 - -![](/zh-cn/img/0cbfd9c1d9d56e6a528b134bd07662ec.png) - 
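上面描述的"信息处理中心"行为,可以用一段极简的示意代码来帮助理解:节点读取输入变量中的文件,提取出文本后交给下游节点。以下仅为概念演示,并非 Dify 的实际实现;其中 `doc_extractor`、`extract_text` 等函数名均为假设,且只以读取 TXT 文件为例(真实的提取器还支持 PDF、DOCX 等格式):

```python
from pathlib import Path
import tempfile

def extract_text(file: Path) -> str:
    # 示意:读取单个文档文件并返回纯文本
    return file.read_text(encoding="utf-8")

def doc_extractor(files):
    # 模拟节点的输入/输出规则:
    # 输入单个文件时输出 string,输入文件列表时输出 array[string]
    if isinstance(files, list):
        return [extract_text(f) for f in files]
    return extract_text(files)

# 演示:单文件与文件列表两种输入对应的输出形态
with tempfile.TemporaryDirectory() as d:
    f1, f2 = Path(d) / "a.txt", Path(d) / "b.txt"
    f1.write_text("第一份文档", encoding="utf-8")
    f2.write_text("第二份文档", encoding="utf-8")
    print(type(doc_extractor(f1)).__name__)        # str
    print(type(doc_extractor([f1, f2])).__name__)  # list
```

可以看到,输出变量的形态完全由输入变量的形态决定,这也是配置下游节点时需要注意的前提。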
-文档提取器节点结构分为输入变量、输出变量。 - -#### 输入变量 - -文档提取器仅接受以下数据结构的变量: - -* `File`,单独一个文件 -* `Array[File]`,多个文件 - -文档提取器仅能够提取文档类型文件中的信息,例如 TXT、Markdown、PDF、HTML、DOCX 格式文件的内容,无法处理图片、音频、视频等格式文件。 - -#### 输出变量 - -输出变量固定命名为 text。输出的变量类型取决于输入变量: - -* 输入变量为 `File`,输出变量为 `string` -* 输入变量为 `Array[File]`,输出变量为 `array[string]` - -> Array 数组变量一般需配合列表操作节点使用,详细说明请参考 [list-operator.md](list-operator.md "mention")。 - -### 配置示例 - -在一个典型的文件交互问答场景中,文档提取器可以作为 LLM 节点的前置步骤,提取应用的文件信息并传递至下游的 LLM 节点,回答用户关于文件的问题。 - -本章节将通过一个典型的 ChatPDF 示例工作流模板,介绍文档提取器节点的使用方法。 - -![](/zh-cn/img/46a1f939088176a76e843422360ea948.png) - -**配置流程:** - -1. 为应用开启文件上传功能。在 [“开始”](start.md) 节点中添加**单文件变量**并命名为 `pdf`。 -2. 添加文档提取节点,并在输入变量内选中 `pdf` 变量。 -3. 添加 LLM 节点,在系统提示词内选中文档提取器节点的输出变量。LLM 可以通过该输出变量读取文件中的内容。 - -![](/zh-cn/img/4f307d5ece35155a24eac7013766f9ee.png) - -4\. 配置结束节点,在结束节点中选择 LLM 节点的输出变量。 - -配置完成后,应用将具备文件上传功能,使用者可以上传 PDF 文件并展开对话。 - -![](/zh-cn/img/0ae3f13cf725cb2c52c72cc354e592ee.png) - - -如需了解如何在聊天对话中上传文件并与 LLM 互动,请参考 [附加功能](../additional-features.md)。 - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/end.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/end.mdx deleted file mode 100644 index 1a2862f8..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/end.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: 结束 -version: '简体中文' ---- - -### 1 定义 - -定义一个工作流程结束的最终输出内容。每一个工作流在完整执行后都需要至少一个结束节点,用于输出完整执行的最终结果。 - -结束节点为流程终止节点,后面无法再添加其他节点,工作流应用中只有运行到结束节点才会输出执行结果。若流程中出现条件分叉,则需要定义多个结束节点。 - -结束节点需要声明一个或多个输出变量,声明时可以引用任意上游节点的输出变量。 - - -*** - -### 2 场景 - -在以下[长故事生成工作流](iteration.md#shi-li-2-chang-wen-zhang-die-dai-sheng-cheng-qi-ling-yi-zhong-bian-pai-fang-shi)中,结束节点声明的变量 `Output` 为上游代码节点的输出,即该工作流会在 Code3 节点执行完成之后结束,并输出 Code3 的执行结果。 - -![](/zh-cn/img/a686f00a80c78f84d88e01712f880386.png) - -**单路执行示例:** - -![](/zh-cn/img/463291f9ab3d62ff3b4094e97d8ea543.png) - -**多路执行示例:** - -![](/zh-cn/img/6dfcb4a8e25ae2132c663299eef596ea.png) - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/http-request.mdx 
b/zh-hans/user-guide/build-app/flow-app/nodes/http-request.mdx deleted file mode 100644 index 43faaa75..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/http-request.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: HTTP 请求 -version: '简体中文' ---- - -### 1 定义 - -允许通过 HTTP 协议发送服务器请求,适用于获取外部数据、webhook、生成图片、下载文件等情景。它让你能够向指定的网络地址发送定制化的 HTTP 请求,实现与各种外部服务的互联互通。 - -该节点支持常见的 HTTP 请求方法: - -* **GET**,用于请求服务器发送某个资源。 -* **POST**,用于向服务器提交数据,通常用于提交表单或上传文件。 -* **HEAD**,类似于 GET 请求,但服务器不返回请求的资源主体,只返回响应头。 -* **PATCH**,用于对已有资源进行部分修改。 -* **PUT**,用于向服务器上传资源,通常用于更新已存在的资源或创建新的资源。 -* **DELETE**,用于请求服务器删除指定的资源。 - -你可以配置 HTTP 请求的 URL、请求头、查询参数、请求体内容以及认证信息等。 - - - HTTP 请求节点配置示意图 - - - -*** - -### 2 场景 - -这个节点的一个实用特性是能够根据场景需要,在请求的不同部分动态插入变量。比如在处理客户评价请求时,你可以将用户名或客户 ID、评价内容等变量嵌入到请求中,以定制化自动回复信息,或获取特定客户信息并发送相关资源至特定的服务器。 - - - HTTP 请求节点场景示意图 - - -HTTP 请求的返回值包括响应体、状态码、响应头和文件。值得注意的是,如果响应中包含了文件(目前仅支持图片类型),这个节点能够自动将文件保存下来,供流程后续步骤使用。这样的设计不仅提高了处理效率,也使得处理带有文件的响应变得简单直接。 diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/ifelse.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/ifelse.mdx deleted file mode 100644 index 680f836a..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/ifelse.mdx +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: 条件分支 -version: '简体中文' ---- - -### 定义 - -根据 If/else/elif 条件将 Chatflow / Workflow 流程拆分成多个分支。 - -### 节点功能 - -条件分支的运行机制包含以下路径: - -* IF 条件:选择变量,设置条件和满足条件的值; -* IF 条件判断为 `True`,执行 IF 路径; -* IF 条件判断为 `False`,执行 ELSE 路径; -* ELIF 条件判断为 `True`,执行 ELIF 路径; -* ELIF 条件判断为 `False`,继续判断下一个 ELIF 路径或执行最后的 ELSE 路径; - -**条件类型** - -支持设置以下条件类型: - -* 包含(Contains) -* 不包含(Not contains) -* 开始是(Start with) -* 结束是(End with) -* 是(Is) -* 不是(Is not) -* 为空(Is empty) -* 不为空(Is not empty) - -*** - -### 场景 - - - - - -以**文本总结工作流**作为示例说明各个条件: - -* IF 条件: 选择开始节点中的 `summarystyle` 变量,条件为**包含** `技术`; -* IF 条件判断为 `True`,执行 IF 路径,通过知识检索节点查询技术相关知识再到 LLM 节点回复(图中上半部分); -* IF 条件判断为 `False`,但添加了 `ELIF` 条件,即 `summarystyle` 变量输入**不包含**`技术`,但 `ELIF` 条件内包含 `科技`,会检查 `ELIF` 内的条件是否为 `True`,然后执行路径内定义的步骤;
-* `ELIF` 内的条件为 `False`,即输入变量既不包含 `技术`,也不包含 `科技`,继续判断下一个 ELIF 路径或执行最后的 ELSE 路径; -* 所有条件判断均为 `False` 时,执行 ELSE 路径,通过 LLM2 节点进行回复(图中下半部分); - -**多重条件判断** - -涉及复杂的条件判断时,可以设置多重条件判断,在条件之间设置 **AND** 或者 **OR**,即在条件之间取**交集**或者**并集**。 - - - - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/iteration.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/iteration.mdx deleted file mode 100644 index e9e00184..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/iteration.mdx +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: 迭代 -version: '简体中文' ---- - -### 定义 - -对数组执行多次步骤直至输出所有结果。 - -迭代步骤在列表中的每个条目(item)上执行相同的步骤。使用迭代的条件是确保输入值已经格式化为列表对象。迭代节点允许 AI 工作流处理更复杂的处理逻辑,迭代节点是循环节点的友好版本,它在自定义程度上做出了一些妥协,以便非技术用户能够快速入门。 - -*** - -### 场景 - -#### **示例1:长文章迭代生成器** - - - 长故事生成器流程图 - - -1. 在 **开始节点** 内输入故事标题和大纲 -2. 使用 **LLM 节点** 基于用户输入的故事标题和大纲,让 LLM 开始编写内容 -3. 使用 **参数提取节点** 将 LLM 输出的完整内容转换成数组格式 -4. 通过 **迭代节点** 包裹的 **LLM 节点** 循环多次生成各章节内容 -5. 将 **直接回复** 节点添加在迭代节点内部,实现在每轮迭代生成之后流式输出 - -**具体配置步骤** - -1. 在 **开始节点** 配置故事标题(title)和大纲(outline); - - - 开始节点配置界面 - - -2. 选择 **LLM 节点** 基于用户输入的故事标题和大纲,让 LLM 开始编写文本; - - - LLM节点配置界面 - - -3. 选择 **参数提取节点**,将故事文本转换成为数组(Array)结构。提取参数为 `sections` ,参数类型为 `Array[Object]` - - - 参数提取节点配置界面 - - -> 参数提取效果受模型推理能力和指令影响,使用推理能力更强的模型,在**指令**内增加示例可以提高参数提取的效果。 - -4. 将数组格式的故事大纲作为迭代节点的输入,在迭代节点内部使用 **LLM 节点** 进行处理 - - - 迭代节点配置界面 - - -在 LLM 节点内配置输入变量 `GenerateOverallOutline/output` 和 `Iteration/item` - - - LLM节点内部配置界面 - - -> 迭代的内置变量:`items[object]` 和 `index[number]` -> -> `items[object] 代表每轮迭代的输入条目;` -> -> `index[number] 代表当前迭代的轮次;` - -5. 在迭代节点内部配置 **直接回复节点** ,可以实现在每轮迭代生成之后流式输出。 - - - 直接回复节点配置界面 - - -6.
完整调试和预览 - - - 完整调试和预览界面 - - -#### **示例 2:长文章迭代生成器(另一种编排方式)** - - - 长文章迭代生成器另一种编排方式流程图 - - -* 在 **开始节点** 内输入故事标题和大纲 -* 使用 **LLM 节点** 生成文章小标题,以及小标题对应的内容 -* 使用 **代码节点** 将完整内容转换成数组格式 -* 通过 **迭代节点** 包裹的 **LLM 节点** 循环多次生成各章节内容 -* 使用 **模板转换** 节点将迭代节点输出的字符串数组转换为字符串 -* 在最后添加 **直接回复节点** 将转换后的字符串直接输出 - -### 什么是数组内容 - -列表是一种特定的数据类型,其中的元素用逗号分隔,以 `[` 开头,以 `]` 结尾。例如: - -**数字型:** - -``` -[0,1,2,3,4,5] -``` - -**字符串型:** - -``` -["monday", "Tuesday", "Wednesday", "Thursday"] -``` - -**JSON 对象:** - -``` -[ - { - "name": "Alice", - "age": 30, - "email": "alice@example.com" - }, - { - "name": "Bob", - "age": 25, - "email": "bob@example.com" - }, - { - "name": "Charlie", - "age": 35, - "email": "charlie@example.com" - } -] -``` - -*** - -### 支持返回数组的节点 - -* 代码节点 -* 参数提取 -* 知识库检索 -* 迭代 -* 工具 -* HTTP 请求 - -### 如何获取数组格式的内容 - -**使用 CODE 节点返回** - - - CODE节点返回数组格式内容界面 - - -**使用 参数提取 节点返回** - - - 参数提取节点返回数组格式内容界面 - - -### 如何将数组转换为文本 - -迭代节点的输出变量为数组格式,无法直接输出。你可以使用一个简单的步骤将数组转换回文本。 - -**使用代码节点转换** - - - 使用代码节点将数组转换为文本界面 - - -代码示例: - -```python -def main(articleSections: list): - data = articleSections - return { - "result": "\n".join(data) - } -``` - -**使用模板节点转换** - - - 使用模板节点将数组转换为文本界面 - - -代码示例: - -```django -{{ articleSections | join("\n") }} -``` \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx deleted file mode 100644 index 0dc9cb59..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/knowledge-retrieval.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: 知识检索 -version: '简体中文' ---- - - -### 定义 - -从知识库中检索与用户问题相关的文本内容,可作为下游 LLM 节点的上下文来使用。 - -*** - -### 应用场景 - -常见情景:构建基于外部数据/知识的 AI 问答系统(RAG)。了解更多关于 RAG 的[基本概念](../../../learn-more/extended-reading/retrieval-augment/)。 - -下图为一个最基础的知识库问答应用示例,该流程的执行逻辑为:知识库检索作为 LLM 节点的前置步骤,在用户问题传递至 LLM 节点之前,先在知识检索节点内将匹配用户问题最相关的文本内容并召回,随后在 LLM 节点内将用户问题与检索到的上下文一同作为输入,让 LLM 根据检索内容来回复问题。 - 
-![知识库问答应用示例](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/workflow/node/34eefebfe8737186d89cc3cf2662a99c.png) - -*** - -### 配置指引 - -**配置流程:** - -1. 选择查询变量。查询变量通常代表用户输入的问题,该变量可以作为输入项并检索知识库中的相关文本分段。在常见的对话类应用中一般将开始节点的 `sys.query` 作为查询变量,知识库所能接受的最大查询内容为 200 字符; -2. 选择需要查询的知识库,可选知识库需要在 Dify 知识库内预先[创建](../../knowledge-base/create-knowledge-and-upload-documents/); -3. 在 元数据筛选 板块中配置元数据的筛选条件,使用元数据功能筛选知识库内的文档。详情请参阅[在应用内集成知识库](https://docs.dify.ai/zh-hans/guides/knowledge-base/integrate-knowledge-within-application)中的 **使用元数据筛选知识** 章节。 -4. 指定[召回模式](../../../learn-more/extended-reading/retrieval-augment/retrieval.md)。自 9 月 1 日后,知识库的召回模式将自动切换为多路召回,不再建议使用 N 选 1 召回模式; -5. 连接并配置下游节点,一般为 LLM 节点; - -![知识检索配置](https://assets-docs.dify.ai/2025/03/f33b9a3ff1c9468fb5d7c1de4c1e02d6.png) - -**输出变量** - -![输出变量](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/workflow/node/ca3d688cb8644b6e0e1f7ce54256ee34.png) - -知识检索的输出变量 `result` 为从知识库中检索到的相关文本分段。其变量数据结构中包含了分段内容、标题、链接、图标、元数据信息。 - -**配置下游节点** - -在常见的对话类应用中,知识库检索的下游节点一般为 LLM 节点,知识检索的**输出变量** `result` 需要配置在 LLM 节点中的 **上下文变量** 内关联赋值。关联后你可以在提示词的合适位置插入 **上下文变量**。 - - -上下文变量是 LLM 节点内定义的特殊变量类型,用于在提示词内插入外部检索的文本内容。 - - -当用户提问时,若在知识检索中召回了相关文本,文本内容会作为上下文变量中的值填入提示词,提供 LLM 回复问题;若未在知识库检索中召回相关的文本,上下文变量值为空,LLM 则会直接回复用户问题。 - -![配置下游 LLM 节点](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/workflow/node/a9cb49ad4b8e0dee1f8bb20f49d1fe79.png) - -该变量除了可以作为 LLM 回复问题时的提示词上下文作为外部知识参考引用,另外由于其数据结构中包含了分段引用信息,同时可以支持应用端的 [**引用与归属**](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu) 功能。 diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/list-operator.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/list-operator.mdx deleted file mode 100644 index 1675ec02..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/list-operator.mdx +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: 列表操作 -version: '简体中文' ---- - -文件列表变量支持同时上传文档文件、图片、音频与视频文件等多种文件。应用使用者在上传文件时,所有文件都存储在同一个 `Array[File]` 
数组类型变量内,**不利于后续单独处理文件。** - -> `Array`数据类型意味着该变量的实际值可能为 \[1.mp3, 2.png, 3.doc],LLM 仅支持读取图片文件或文本内容等单一值作为输入变量,无法直接读取数组变量。 - -### 节点功能 - -列表操作节点可以对文件的格式类型、文件名、大小等属性进行过滤与提取,将不同格式的文件传递给对应的处理节点,以实现对不同文件处理流的精确控制。 - -例如在一个应用中,允许用户同时上传文档文件和图片文件。不同的文件需要通过**列表操作节点**进行分拣,将不同的文件交由不同的流程处理。 - -![](/zh-cn/img/43ab9a37624bdba10ff37a5b9004db7f.png) - -列表操作节点一般用于提取数组变量中的信息,通过设置条件将其转化为能够被下游节点所接受的变量类型。它的结构分为输入变量、过滤条件、排序、取前 N 项、输出变量。 - -![](/zh-cn/img/faaffe7d19591aeb9983629b94f53a66.png) - -#### 输入变量 - -列表操作节点仅接受以下数据结构变量: - -* Array\[string] -* Array\[number] -* Array\[file] - -#### 过滤条件 - -处理输入变量中的数组,添加过滤条件。从数组中分拣所有满足条件的数组变量,可以理解为对变量的属性进行过滤。 - -举例:文件中可能包含多重维度的属性,例如文件名、文件类型、文件大小等属性。过滤条件允许设置筛选条件,选择并提取数组变量中的特定文件。 - -支持提取以下变量: - -* type 文件类别,包含图片,文档,音频和视频类型 -* size 文件大小 -* name 文件名 -* url 指的是应用使用者通过 URL 上传的文件,可填写完整 URL 进行筛选 -* extension 文件拓展名 -* mime\_type - - [MIME 类型](https://datatracker.ietf.org/doc/html/rfc2046)是用来标识文件内容类型的标准化字符串。示例: "text/html" 表示 HTML 文档。 -* transfer\_method - - 文件上传方式,分为本地上传或通过 URL 上传 - -#### 排序 - -提供对于输入变量中数组的排序能力,支持根据文件属性进行排序。 - -* 升序排列 - - 默认排序选项,按照从小到大排序。对于字母和文本,按字母表顺序排列(A - Z) -* 降序排列 - - 由大到小排序,对于字母和文本,按字母表倒序排列(Z - A) - -该选项常用于配合输出变量中的 first\_record 及 last\_record 使用。 - -#### 取前 N 项 - -可以在 1-20 中选值,用途是为了选中数组变量的前 n 项。 - -#### 输出变量 - -满足各项过滤条件的数组元素。过滤条件、排序和限制可以单独开启。如果同时开启,则返回符合所有条件的数组元素。 - -* Result,过滤结果,数据类型为数组变量。若数组中仅包含 1 个文件,则输出变量仅包含 1 个数组元素; -* first\_record,筛选完的数组的首元素,即 result\[0]; -* last\_record,筛选完的数组的尾元素,即 result\[array.length-1]。 - -*** - -### 配置案例 - -在文件交互问答场景中,应用使用者可能会同时上传文档文件或图片文件。LLM 仅具备识别图片文件的能力,不支持读取文档文件。此时需要用到 [列表操作](list-operator.md) 节点预处理文件变量的数组,将不同的文件类型分别发送至对应的处理节点。编排步骤如下: - -1. 开启 [Features](../additional-features.md) 功能,并在文件类型中勾选 “图片” + “文档文件” 类型。 -2. 添加两个列表操作节点,在 “过滤” 条件中分别设置提取图片与文档变量。 -3. 提取文档文件变量,传递至 “文档提取器” 节点;提取图片文件变量,传递至 “LLM” 节点。 -4. 
在末尾添加 “直接回复” 节点,填写 LLM 节点的输出变量。 - -![](/zh-cn/img/1eb561e0c60741f668a93533ed98bfb5.png) - -应用使用者同时上传文档文件和图片后,文档文件自动分流至文档提取器节点,图片文件自动分流至 LLM 节点以实现对于混合文件的共同处理。 diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/llm.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/llm.mdx deleted file mode 100644 index 538b7721..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/llm.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: LLM -version: '简体中文' ---- - -### 定义 - -调用大语言模型的能力,处理用户在 "开始" 节点中输入的信息(自然语言、上传的文件或图片),给出有效的回应信息。 - - - LLM 节点 - - -*** - -### 应用场景 - -LLM 节点是 Chatflow/Workflow 的核心节点。该节点能够利用大语言模型的对话/生成/分类/处理等能力,根据给定的提示词处理广泛的任务类型,并能够在工作流的不同环节使用。 - -* **意图识别**,在客服对话情景中,对用户问题进行意图识别和分类,导向下游不同的流程。 -* **文本生成**,在文章生成情景中,作为内容生成的节点,根据主题、关键词生成符合的文本内容。 -* **内容分类**,在邮件批处理情景中,对邮件的类型进行自动化分类,如咨询/投诉/垃圾邮件。 -* **文本转换**,在文本翻译情景中,将用户提供的文本内容翻译成指定语言。 -* **代码生成**,在辅助编程情景中,根据用户的要求生成指定的业务代码,编写测试用例。 -* **RAG**,在知识库问答情景中,将检索到的相关知识和用户问题重新组织回复问题。 -* **图片理解**,使用 vision 能力的多模态模型,能对图像内的信息进行理解和问答。 - -选择合适的模型,编写提示词,你可以在 Chatflow/Workflow 中构建出强大、可靠的解决方案。 - -*** - -### 配置示例 - -在应用编辑页中,点击鼠标右键或轻点上一节点末尾的 + 号,添加节点并选择 LLM。 - - - LLM 节点配置-选择模型 - - -**配置步骤:** - -1. **选择模型**,Dify 提供了全球主流模型的[支持](user-guide/getting-started/readme/model-providers.md),包括 OpenAI 的 GPT 系列、Anthropic 的 Claude 系列、Google 的 Gemini 系列等,选择一个模型取决于其推理能力、成本、响应速度、上下文窗口等因素,你需要根据场景需求和任务类型选择合适的模型。 - -> 如果你是初次使用 Dify ,在 LLM 节点选择模型之前,需要在 **系统设置—模型供应商** 内提前完成[模型配置](../../model-configuration/)。 - -2. **配置模型参数**,模型参数用于控制模型的生成结果,例如温度、TopP,最大标记、回复格式等,为了方便选择系统同时提供了 3 套预设参数:创意,平衡和精确。如果你对以上参数并不熟悉,建议选择默认设置。若希望应用具备图片分析能力,请选择具备视觉能力的模型。 -3. **填写上下文(可选)**,上下文可以理解为向 LLM 提供的背景信息,常用于填写[知识检索](knowledge-retrieval.md)的输出变量。 -4. **编写提示词**,LLM 节点提供了一个易用的提示词编排页面,选择聊天模型或补全模型,会显示不同的提示词编排结构。如果选择聊天模型(Chat model),你可以自定义系统提示词(SYSTEM)/用户(USER)/ 助手(ASSISTANT)三部分内容。 - - - 编写提示词 - - -如果在编写系统提示词(SYSTEM)时没有好的思路,也可以使用提示生成器功能,借助 AI 能力快速生成适合实际业务场景的提示词。 - - - 提示生成器 - - -在提示词编辑器中,你可以通过输入 **"/"** 呼出 **变量插入菜单**,将 **特殊变量块** 或者 **上游节点变量** 插入到提示词中作为上下文内容。 - - - 呼出变量插入菜单 - - -5. 
**高级设置**,可以开关记忆功能并设置记忆窗口、开关 Vision 功能或者使用 Jinja-2 模版语言来进行更复杂的提示词等。 - -*** - -### 特殊变量说明 - -**上下文变量** - -上下文变量是一种特殊变量类型,用于向 LLM 提供背景信息,常用于在知识检索场景下使用。详细说明请参考[知识检索节点](knowledge-retrieval.md)。 - -**图片文件变量** - -具备视觉能力的 LLM 可以通过变量读取应用使用者所上传的图片。开启 VISION 后,选择图片文件的输出变量完成设置。 - - - 视觉上传功能 - - -**会话历史** - -为了在文本补全类模型(例如 gpt-3.5-turbo-Instruct)内实现聊天型应用的对话记忆,Dify 在原[提示词专家模式(已下线)](user-guide/learn-more/extended-reading/prompt-engineering/prompt-engineering-1/)内设计了会话历史变量,该变量沿用至 Chatflow 的 LLM 节点内,用于在提示词中插入 AI 与用户之间的聊天历史,帮助 LLM 理解对话上文。 - -> 会话历史变量应用并不广泛,仅在 Chatflow 中选择文本补全类模型时可以插入使用。 - - - 插入会话历史变量 - - -**模型参数** - -模型的参数会影响模型的输出效果。不同模型的参数会有所区别。下图为`gpt-4`的参数列表。 - - - 模型参数列表 - - -主要的参数名词解释如下: - -* **温度:** 通常是0-1的一个值,控制随机性。温度越接近0,结果越确定和重复,温度越接近1,结果越随机。 -* **Top P:** 控制结果的多样性。模型根据概率从候选词中选择,确保累积概率不超过预设的阈值P。 -* **存在惩罚:** 用于减少重复生成同一实体或信息,通过对已经生成的内容施加惩罚,使模型倾向于生成新的或不同的内容。参数值增加时,对于已经生成过的内容,模型在后续生成中被施加更大的惩罚,生成重复内容的可能性越低。 -* **频率惩罚:** 对过于频繁出现的词或短语施加惩罚,通过降低这些词的生成概率。随着参数值的增加,对频繁出现的词或短语施加更大的惩罚。较高的参数值会减少这些词的出现频率,从而增加文本的词汇多样性。 - -如果你不理解这些参数是什么,可以选择**加载预设**,从创意、平衡、精确三种预设中选择。 - - - 加载预设参数 - - -*** - -### 高级功能 - -**记忆:** 开启记忆后问题分类器的每次输入将包含对话中的聊天历史,以帮助 LLM 理解上文,提高对话交互中的问题理解能力。 - -**记忆窗口:** 记忆窗口关闭时,系统会根据模型上下文窗口动态过滤聊天历史的传递数量;打开时用户可以精确控制聊天历史的传递数量(对数)。 - -**对话角色名设置:** 由于模型在训练阶段的差异,不同模型对于角色名的指令遵循程度不同,如 Human/Assistant,Human/AI,人类/助手等等。为适配多模型的提示响应效果,系统提供了对话角色名的设置,修改对话角色名将会修改会话历史的角色前缀。 - -**Jinja-2 模板:** LLM 的提示词编辑器内支持 Jinja-2 模板语言,允许你借助 Jinja2 这一强大的 Python 模板语言,实现轻量级数据转换和逻辑处理,参考[官方文档](https://jinja.palletsprojects.com/en/3.1.x/templates/)。 - -*** - -### 使用案例 - -* **读取知识库内容** - -想要让工作流应用具备读取 ["知识库"](/zh-cn/user-guide/knowledge-base/) 内容的能力,例如搭建智能客服应用,请参考以下步骤: - -1. 在 LLM 节点上游添加知识库检索节点; -2. 将知识检索节点的 **输出变量** `result` 填写至 LLM 节点中的 **上下文变量** 内; -3. 
将 **上下文变量** 插入至应用提示词内,赋予 LLM 读取知识库内的文本能力。 - - - 上下文变量 - - -[知识检索节点](knowledge-retrieval.md)输出的变量 `result` 还包含了分段引用信息,你可以通过 [**引用与归属**](../../knowledge-base/retrieval-test-and-citation.md#id-2-yin-yong-yu-gui-shu) 功能查看信息来源。 - -> 上游节点的普通变量同样可以填写至上下文变量内,例如开始节点的字符串类型变量,但 **引用与归属** 功能将会失效。 - -* **读取文档文件** - -想要让工作流应用具备读取文档内容的能力,例如搭建 ChatPDF 应用,可以参考以下步骤: - -* 在 “开始” 节点内添加文件变量; -* 在 LLM 节点上游添加文档提取器节点,将文件变量作为输入变量; -* 将文档提取器节点的 **输出变量** `text` 填写至 LLM 节点中的提示词内。 - -如需了解更多,请参考[文件上传](/zh-cn/user-guide/build-app/flow-app/file-upload)。 - - - 填写系统提示词 - diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx deleted file mode 100644 index 37965e6a..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/parameter-extractor.mdx +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: 参数提取 -version: '简体中文' ---- - -### 定义 - -利用 LLM 从自然语言推理并提取结构化参数,用于后置的工具调用或 HTTP 请求。 - -Dify 工作流内提供了丰富的[工具](/zh-cn/user-guide/tools/introduction)选择,其中大多数工具的输入为结构化参数,参数提取器可以将用户的自然语言转换为工具可识别的参数,方便工具调用。 - -工作流内的部分节点有特定的数据格式传入要求,如[迭代](./iteration)节点的输入要求为数组格式,参数提取器可以方便地实现结构化参数的转换。 - -*** - -### 场景 - -1. **从自然语言中提取工具所需的关键参数**,如构建一个简单的对话式 Arxiv 论文检索应用。 - -在该示例中:Arxiv 论文检索工具的输入参数要求为 **论文作者** 或 **论文编号**,参数提取器从问题"这篇论文中讲了什么内容:2405.10739"中提取出论文编号 **2405.10739**,并作为工具参数进行精确查询。 - - - Arxiv 论文检索工具流程图 - - -2. **将文本转换为结构化数据**,如长故事迭代生成应用中,作为[迭代节点](iteration.md)的前置步骤,将文本格式的章节内容转换为数组格式,方便迭代节点进行多轮生成处理。 - - - 长故事迭代生成应用流程图 - - -3. **提取结构化数据并使用** [**HTTP 请求**](./http-request) ,可请求任意可访问的 URL ,适用于获取外部检索结果、webhook、生成图片等情景。 - -*** - -### 如何配置 - - - 参数提取配置界面 - - -**配置步骤** - -1. 选择输入变量,一般为用于提取参数的变量输入。输入变量支持 file 类型 -2. 选择模型,参数提取器的提取依靠的是 LLM 的推理和结构化生成能力 -3. 定义提取参数,可以手动添加需要提取的参数,也可以**从已有工具中快捷导入** -4. 
编写指令,在提取复杂的参数时,编写示例可以帮助 LLM 提升生成的效果和稳定性 - -**高级设置** - -**推理模式** - -部分模型同时支持两种推理模式,通过函数/工具调用或是纯提示词的方式实现参数提取,在指令遵循能力上有所差别。例如某些模型在函数调用效果欠佳的情况下可以切换成提示词推理。 - -* Function Call/Tool Call -* Prompt - -**记忆** - -开启记忆后,参数提取节点的每次输入将包含对话中的聊天历史,以帮助 LLM 理解上文,提高对话交互中的问题理解能力。 - -**图片** - -开启后,输入变量支持传入图片文件,仅适用于具备视觉(Vision)能力的模型。 - -**输出变量** - -* 提取定义的变量 -* 节点内置变量 - -`__is_success` (Number) 提取是否成功,成功时值为 1,失败时值为 0。 - -`__reason` (String) 提取错误原因。 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/question-classifier.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/question-classifier.mdx deleted file mode 100644 index b354bbb6..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/question-classifier.mdx +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: 问题分类 -version: '简体中文' ---- - -### **定义** - -通过定义分类描述,问题分类器能够根据用户输入,使用 LLM 推理与之相匹配的分类并输出分类结果,向下游节点提供更加精确的信息。 - -*** - -### **场景** - -常见的使用情景包括**客服对话意图分类、产品评价分类、邮件批量分类**等。 - -在一个典型的产品客服问答场景中,问题分类器可以作为知识库检索的前置步骤,对用户输入问题意图进行分类处理,分类后导向下游不同的知识库查询相关的内容,以精确回复用户的问题。 - -下图为产品客服场景的示例工作流模板: - - - 产品客服场景的示例工作流模板 - - -在该场景中我们设置了 3 个分类标签/描述: - -* 分类 1 :**与售后相关的问题** -* 分类 2:**与产品操作使用相关的问题** -* 分类 3 :**其他问题** - -当用户输入不同的问题时,问题分类器会根据已设置的分类标签 / 描述自动完成分类: - -* "**iPhone 14 如何设置通讯录联系人?**" —> "**与产品操作使用相关的问题**" -* "**保修期限是多久?**" —> "**与售后相关的问题**" -* "**今天天气怎么样?**" —> "**其他问题**" - -*** - -### 如何配置 - - - 问题分类器配置界面 - - -**配置步骤:** - -1. **选择输入变量**,指用于分类的输入内容,支持输入[文件变量](broken-reference)。客服问答场景下一般为用户输入的问题 `sys.query`; -2. **选择推理模型**,问题分类器基于大语言模型的自然语言分类和推理能力,选择合适的模型将有助于提升分类效果; -3. **编写分类标签/描述**,你可以手动添加多个分类,通过编写分类的关键词或者描述语句,让大语言模型更好地理解分类依据。 -4. 
**选择分类对应的下游节点,** 问题分类节点完成分类之后,可以根据分类与下游节点的关系选择后续的流程路径。 - -#### **高级设置:** - -**指令:** 你可以在 **高级设置-指令** 里补充附加指令,比如更丰富的分类依据,以增强问题分类器的分类能力。 - -**记忆:** 开启记忆后问题分类器的每次输入将包含对话中的聊天历史,以帮助 LLM 理解上文,提高对话交互中的问题理解能力。 - -**图片分析:** 仅适用于具备图片识别能力的 LLM,允许输入图片变量。 - -**记忆窗口:** 记忆窗口关闭时,系统会根据模型上下文窗口动态过滤聊天历史的传递数量;打开时用户可以精确控制聊天历史的传递数量(对数)。 - -**输出变量:** - -`class_name` - -即分类之后输出的分类名。你可以在下游节点需要时使用分类结果变量。 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/start.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/start.mdx deleted file mode 100644 index b26ad423..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/start.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: 开始 -version: '简体中文' ---- - -## 定义 - -**"开始"** 节点是每个工作流应用(Chatflow / Workflow)必备的预设节点,为后续工作流节点以及应用的正常流转提供必要的初始信息,例如应用使用者所输入的内容、以及[上传的文件](../file-upload)等。 - -### 配置节点 - -在开始节点的设置页,你可以看到两部分设置,分别是 **"输入字段"** 和预设的[**系统变量**](../variables.md#xi-tong-bian-liang)。 - -![](/zh-cn/img/ddca497c976020d9b8466ae1db553fd0.png) - -### 输入字段 - -输入字段功能由应用开发者设置,通常用于让应用使用者主动补全更多信息。例如在周报应用中要求使用者按照格式预先提供更多背景信息,如姓名、工作日期区间、工作详情等。这些前置信息将有助于 LLM 生成质量更高的答复。 - -支持以下六种类型输入变量,所有变量均可设置为必填项: - -* **文本**:短文本,由应用使用者自行填写内容,最大长度 256 字符。 -* **段落**:长文本,允许应用使用者输入较长字符。 -* **下拉选项**:由应用开发者固定选项,应用使用者仅能选择预设选项,无法自行填写内容。 -* **数字**:仅允许用户输入数字。 -* **单文件**:允许应用使用者单独上传文件,支持文档类型文件、图片、音频、视频和其它文件类型。支持通过本地上传文件或粘贴文件 URL。详细用法请参考[文件上传](../file-upload.md)。 -* **文件列表**:允许应用使用者批量上传文件,支持文档类型文件、图片、音频、视频和其它文件类型。支持通过本地上传文件或粘贴文件 URL。详细用法请参考[文件上传](../file-upload.md)。 - -> Dify 内置的文档提取器节点只能够处理部分格式的文档文件。如需处理图片、音频或视频类型文件,请参考[外部数据工具](../../extension/api-based-extension/external-data-tool.md)搭建对应文件的处理节点。 - -配置完成后,用户在使用应用前将按照输入项指引,向 LLM 提供必要信息。更多的信息将有助于 LLM 提升问答效率。 - -![](/zh-cn/img/b6ead0b90a97e0a79075a57798b883cc.png) - -### 系统变量 - -系统变量指的是在 Chatflow / Workflow 应用内预设的系统级参数,可以被应用内的其它节点全局读取。通常用于进阶开发场景,例如搭建多轮次对话应用、收集应用日志与监控、记录不同应用和用户的使用行为等。 - -**Workflow** - -Workflow 类型应用提供以下系统变量: - -| 变量名称 | 数据类型 | 说明 | 备注 | -|---------|--------|------|------| -| `sys.files` 
[LEGACY] | Array[File] | 文件参数,存储用户初始使用应用时上传的图片 | 图片上传功能需在应用编排页右上角的 "功能" 处开启 | -| `sys.user_id` | String | 用户 ID,每个用户在使用工作流应用时,系统会自动向用户分配唯一标识符,用以区分不同的对话用户 | | -| `sys.app_id` | String | 应用 ID,系统会向每个 Workflow 应用分配一个唯一的标识符,用以区分不同的应用,并通过此参数记录当前应用的基本信息 | 面向具备开发能力的用户,通过此参数区分并定位不同的 Workflow 应用 | -| `sys.workflow_id` | String | Workflow ID,用于记录当前 Workflow 应用内所包含的所有节点信息 | 面向具备开发能力的用户,可以通过此参数追踪并记录 Workflow 内的包含节点信息 | -| `sys.workflow_run_id` | String | Workflow 应用运行 ID,用于记录 Workflow 应用中的运行情况 | 面向具备开发能力的用户,可以通过此参数追踪应用的历次运行情况 | - -![](/zh-cn/img/cb39be409f0037549d45f4b7d05aa9ce.png) - -**Chatflow** - -Chatflow 类型应用提供以下系统变量: - -| 变量名称 | 数据类型 | 说明 | 备注 | -|---------|--------|------|------| -| `sys.query` | String | 用户在对话框中初始输入的内容 | | -| `sys.files` | Array[File] | 用户在对话框内上传的图片 | 图片上传功能需在应用编排页右上角的 "功能" 处开启 | -| `sys.dialogue_count` | Number | 用户在与 Chatflow 类型应用交互时的对话轮数。每轮对话后自动计数增加 1,可以和 if-else 节点搭配出丰富的分支逻辑。例如到第 X 轮对话时,回顾历史对话并给出分析 | | -| `sys.conversation_id` | String | 对话框交互会话的唯一标识符,将所有相关的消息分组到同一个对话中,确保 LLM 针对同一个主题和上下文持续对话 | | -| `sys.user_id` | String | 分配给每个应用用户的唯一标识符,用以区分不同的对话用户 | | -| `sys.app_id` | String | 应用 ID,系统会向每个 Workflow 应用分配一个唯一的标识符,用以区分不同的应用,并通过此参数记录当前应用的基本信息 | 面向具备开发能力的用户,通过此参数区分并定位不同的 Workflow 应用 | -| `sys.workflow_id` | String | Workflow ID,用于记录当前 Workflow 应用内所包含的所有节点信息 | 面向具备开发能力的用户,可以通过此参数追踪并记录 Workflow 内的包含节点信息 | -| `sys.workflow_run_id` | String | Workflow 应用运行 ID,用于记录 Workflow 应用中的运行情况 | 面向具备开发能力的用户,可以通过此参数追踪应用的历次运行情况 | - -![](/zh-cn/img/233efef6802ae700489f3ab3478bca6b.png) \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/template.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/template.mdx deleted file mode 100644 index 380d0a25..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/template.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: 模板转换 -version: '简体中文' ---- - -### 定义 - -允许借助 Jinja2 的 Python 模板语言灵活地进行数据转换、文本处理等。 - -### 什么是 Jinja? - -> Jinja is a fast, expressive, extensible templating engine. 
-> -> Jinja 是一个快速、表达力强、可扩展的模板引擎。 - -—— [https://jinja.palletsprojects.com/en/3.1.x/](https://jinja.palletsprojects.com/en/3.1.x/) - -### 场景 - -模板节点允许你借助 Jinja2 这一强大的 Python 模板语言,在工作流内实现轻量、灵活的数据转换,适用于文本处理、JSON 转换等情景。例如灵活地格式化并合并来自前面步骤的变量,创建出单一的文本输出。这非常适合于将多个数据源的信息汇总成一个特定格式,满足后续步骤的需求。 - -**示例1:** 将多个输入(文章标题、介绍、内容)拼接为完整文本 - - - 拼接文本示例 - - -**示例2:** 将知识检索节点获取的信息及其相关的元数据,整理成一个结构化的 Markdown 格式 - -```Plain -{% for item in chunks %} -### Chunk {{ loop.index }}. -### Similarity: {{ item.metadata.score | default('N/A') }} - -#### {{ item.title }} - -##### Content -{{ item.content | replace('\n', '\n\n') }} - ---- -{% endfor %} -``` - - - 知识检索节点输出转换为 Markdown示例 - - -你可以参考 Jinja 的[官方文档](https://jinja.palletsprojects.com/en/3.1.x/templates/),创建更为复杂的模板来执行各种任务。 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/tools.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/tools.mdx deleted file mode 100644 index 280e68f5..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/tools.mdx +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: 工具 -version: '简体中文' ---- - -“工具”节点可以为工作流提供强大的第三方能力支持,分为以下三种类型: - -* **内置工具**,Dify 第一方提供的工具,使用该工具前可能需要先给工具进行 **授权**。 -* **自定义工具**,通过 [OpenAPI/Swagger 标准格式](https://swagger.io/specification/)导入或配置的工具。如果内置工具无法满足使用需求,你可以在 **Dify 菜单导航 --工具** 内创建自定义工具。 -* **工作流**,你可以编排一个更复杂的工作流,并将其发布为工具。详细说明请参考[工具配置说明](/zh-cn/user-guide/tools/introduction)。 - -### 添加工具节点 - -添加节点时,选择右侧的 “工具” tab 页。配置工具节点一般分为两个步骤: - -1. 对工具授权/创建自定义工具/将工作流发布为工具 -2. 
配置工具输入和参数 - - - - - -工具节点可以连接其它节点,通过[变量](/zh-cn/user-guide/build-app/flow-app/variables)处理和传递数据。 - - - - - -### 将工作流应用发布为工具 - -工作流应用可以被发布为工具,并被其它工作流内的节点所应用。关于如何创建自定义工具和配置工具,请参考[工具配置说明](/zh-cn/user-guide/tools/introduction)。 diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx deleted file mode 100644 index 0be9e8f2..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/variable-aggregation.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: 变量聚合 -version: '简体中文' ---- - -# 变量聚合 - -### 定义 - -将多路分支的变量聚合为一个变量,以实现下游节点统一配置。 - -变量聚合节点(原变量赋值节点)是工作流程中的一个关键节点,它负责整合不同分支的输出结果,确保无论哪个分支被执行,其结果都能通过一个统一的变量来引用和访问。这在多分支的情况下非常有用,可将不同分支下相同作用的变量映射为一个输出变量,避免下游节点重复定义。 - -*** - -### 场景 - -通过变量聚合,可以将诸如问题分类或条件分支等多路输出聚合为单路,供流程下游的节点使用和操作,简化了数据流的管理。 - -**问题分类后的多路聚合** - -未添加变量聚合,分类 1 和 分类 2 分支经不同的知识库检索后需要重复定义下游的 LLM 和直接回复节点。 - - - 问题分类无变量聚合的流程图 - - -添加变量聚合,可以将两个知识检索节点的输出聚合为一个变量。 - - - 问题分类后添加变量聚合的流程图 - - -**IF/ELSE 条件分支后的多路聚合** - - - IF/ELSE 条件分支后添加变量聚合的流程图 - - -### 格式要求 - -变量聚合器支持聚合多种数据类型,包括字符串(`String`)、数字(`Number`)、文件(`File`)、对象(`Object`)以及数组(`Array`)。 - -**变量聚合器只能聚合同一种数据类型的变量**。若第一个添加至变量聚合节点内的变量数据格式为 `String`,后续连线时会自动过滤可添加变量为 `String` 类型。 - -**聚合分组** - -开启聚合分组后,变量聚合器可以聚合多组变量,各组内聚合时要求同一种数据类型。 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/nodes/variable-assigner.mdx b/zh-hans/user-guide/build-app/flow-app/nodes/variable-assigner.mdx deleted file mode 100644 index fbdb04e0..00000000 --- a/zh-hans/user-guide/build-app/flow-app/nodes/variable-assigner.mdx +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: 变量赋值 -version: '简体中文' ---- - -### 定义 - -变量赋值节点用于向可写入变量进行变量赋值,已支持以下可写入变量: - -* [会话变量](../key-concept.md#hui-hua-bian-liang)。 - -用法:通过变量赋值节点,你可以将工作流内的变量赋值到会话变量中用于临时存储,并可以在后续对话中持续引用。 - - - 会话变量示例图 - - -*** - -### 使用场景示例 - -通过变量赋值节点,你可以将会话过程中的**上下文、上传至对话框的文件(即将上线)、用户所输入的偏好信息**等写入至会话变量,并在后续对话中引用已存储的信息导向不同的处理流程或者进行回复。 - -**场景 1** - 
-**自动判断提取并存储对话中的信息**,在会话内通过会话变量数组记录用户输入的重要信息,并在后续对话中让 LLM 基于会话变量中存储的历史信息进行个性化回复。 - -示例:开始对话后,LLM 会自动判断用户输入是否包含需要记住的事实、偏好或历史记录。如果有,LLM 会先提取并存储这些信息,然后再用这些信息作为上下文来回答。如果没有新的信息需要保存,LLM 会直接使用自身的相关记忆知识来回答问题。 - - - 自动判断提取并存储对话中的信息流程图 - - -**配置流程:** - -1. **设置会话变量**:首先设置一个会话变量数组 `memories`,类型为 array\[object],用于存储用户的事实、偏好和历史记录。 -2. **判断和提取记忆**: - * 添加一个条件判断节点,使用 LLM 来判断用户输入是否包含需要记住的新信息。 - * 如果有新信息,走上分支,使用 LLM 节点提取这些信息。 - * 如果没有新信息,走下分支,直接使用现有记忆回答。 -3. **变量赋值/写入**: - * 在上分支中,使用变量赋值节点,将提取出的新信息追加(append)到 `memories` 数组中。 - * 使用转义功能将 LLM 输出的文本字符串转换为适合存储在 array\[object] 中的格式。 -4. **变量读取和使用**: - * 在后续的 LLM 节点中,将 `memories` 数组中的内容转换为字符串,并插入到 LLM 的提示词 Prompts 中作为上下文。 - * LLM 使用这些历史信息来生成个性化回复。 - -图中的 code 节点代码如下: - -1. 将字符串转义为 object - -```python -import json - -def main(arg1: str) -> object: - try: - # Parse the input JSON string - input_data = json.loads(arg1) - - # Extract the memory object - memory = input_data.get("memory", {}) - - # Construct the return object - result = { - "facts": memory.get("facts", []), - "preferences": memory.get("preferences", []), - "memories": memory.get("memories", []) - } - - return { - "mem": result - } - except json.JSONDecodeError: - return { - "result": "Error: Invalid JSON string" - } - except Exception as e: - return { - "result": f"Error: {str(e)}" - } -``` - -2. 
将 object 转义为字符串 - -```python
import json

def main(arg1: list) -> dict:
    try:
        # Assume arg1[0] is the dictionary we need to process
        context = arg1[0] if arg1 else {}

        # Construct the memory object
        memory = {"memory": context}

        # Convert the object to a JSON string
        json_str = json.dumps(memory, ensure_ascii=False, indent=2)

        # Wrap the JSON string in tags
        result = f"{json_str}"

        return {
            "result": result
        }
    except Exception as e:
        return {
            "result": f"Error: {str(e)}"
        }
``` - -**场景 2** - -**记录用户的初始偏好信息**,在会话内记住用户输入的语言偏好,在后续对话中持续使用该语言类型进行回复。 - -示例:用户在对话开始前,在 `language` 输入框内指定了 "中文",该语言将会被写入会话变量,LLM 在后续进行答复时会参考会话变量中的信息,在后续对话中持续使用"中文"进行回复。 - - - 记录用户的初始偏好信息流程图 - - -**配置流程:** - -**设置会话变量**:首先设置一个会话变量 `language`,在会话流程开始时添加一个条件判断节点,用来判断 `language` 变量的值是否为空。 - -**变量写入/赋值**:首轮对话开始时,若 `language` 变量值为空,则使用 LLM 节点来提取用户输入的语言,再通过变量赋值节点将该语言类型写入到会话变量 `language` 中。 - -**变量读取**:在后续对话轮次中 `language` 变量已存储用户语言偏好。在后续对话中,LLM 节点通过引用 language 变量,使用用户的偏好语言类型进行回复。 - -**场景 3** - -**辅助 Checklist 检查**,在会话内通过会话变量记录用户的输入项,更新 Checklist 中的内容,并在后续对话中检查遗漏项。 - -示例:开始对话后,LLM 会要求用户在对话框内输入 Checklist 所涉及的事项,用户一旦提及了 Checklist 中的内容,将会更新并存储至会话变量内。LLM 会在每轮对话后提醒用户继续补充遗漏项。 - - - 辅助 Checklist 检查流程图 - - -**配置流程:** - -* **设置会话变量:** 首先设置一个会话变量 `ai_checklist`,在 LLM 内引用该变量作为上下文进行检查。 -* **变量赋值/写入:** 每一轮对话时,在 LLM 节点内检查 `ai_checklist` 内的值并比对用户输入,若用户提供了新的信息,则更新 Checklist 并将输出内容通过变量赋值节点写入到 `ai_checklist` 内。 -* **变量读取:** 每一轮对话读取 `ai_checklist` 内的值并比对用户输入,直至所有 checklist 完成。 - -*** - -### 使用变量赋值节点 - -点击节点右侧 + 号,选择"变量赋值"节点,填写"赋值的变量"和"设置变量"。 - - - 变量赋值节点设置界面 - - -**设置变量:** - -赋值的变量:选择被赋值变量,即指定需要被赋值的目标会话变量。 - -设置变量:选择需要赋值的变量,即指定需要被转换的源变量。 - -以上图赋值逻辑为例:将上一个节点的文本输出项 `Language Recognition/text` 赋值到会话变量 `language` 内。 - -**写入模式:** - -* 覆盖,将源变量的内容覆盖至目标会话变量 -* 追加,当指定变量为 Array 类型时,将源变量的内容追加至目标数组的末尾 -* 清空,清空目标会话变量中的内容 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/orchestrate-node.mdx b/zh-hans/user-guide/build-app/flow-app/orchestrate-node.mdx deleted file mode 100644 index 
7f839e1b..00000000 --- a/zh-hans/user-guide/build-app/flow-app/orchestrate-node.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: 编排节点 -version: '简体中文' ---- - -Chatflow 和 Workflow 类型应用内的节点均可以通过可视化拖拉拽的形式进行编排,支持**串行**和**并行**两种编排设计模式。 - - - 串行和并行节点流对比图 - - -## 串行设计 - -该结构要求节点按照预设顺序依次执行,每个节点需等待前一个节点完成并输出结果后才能开始工作,有助于**确保任务按照逻辑顺序执行。** - -例如,在一个采用串行结构设计的"小说生成" AI 应用内,用户输入小说风格、节奏和角色后,LLM 按照顺序补全小说大纲、小说剧情和结尾;每个节点都基于前一个节点的输出结果展开工作,确保小说的风格一致性。 - -### 设计串行结构 - -点击两个节点中间连线的 + 号即可在中间添加一个串行节点;按照顺序将节点依次串线连接,最后将线收拢至 **"结束"节点**(Workflow)或 **"直接回复"节点**(Chatflow)完成设计。 - - - 串行结构设计示意图 - - -### 查看串行结构应用日志 - -串行结构应用内的日志将按照顺序展示各个节点的运行情况。点击对话框右上角的 「查看日志-追踪」,查看工作流完整运行过程各节点的输入 / 输出、Token 消耗、运行时长等。 - - - 串行结构应用日志界面 - - -## 并行设计 - -该设计模式允许多个节点在同一时间内共同执行,前置节点可以同时触发位于并行结构内的多个节点。并行结构内的节点不存在依赖关系,能够同时执行任务,更好地提升**节点的任务执行效率。** - -例如,在某个并行设计的翻译工作流应用内,用户输入源文本触发工作流后,位于并行结构内的节点将共同收到前置节点的流转指令,同时开展多语言的翻译任务,缩短任务的处理耗时。 - - - 并行设计示意图 - - -### 新建并行结构 - -你可以参考以下四种方式,通过新建节点或拖拽的方式创建并行结构。 - -**方式 1** - -将鼠标 Hover 至某个节点,显示 `+` 按钮,支持新建多个节点,创建后自动形成并行结构。 - - - 新建并行结构方式1 - - -**方式 2** - -拖拽节点末尾的 `+` 按钮,拉出连线形成并行结构。 - - - 新建并行结构方式2 - - -**方式 3** - -如果画布存在多个节点,通过可视化拖拽的方式组成并行结构。 - - - 新建并行结构方式3 - - -**方式 4** - -除了在画布中通过直接添加并行节点或可视化拖拽方式组成并行结构,你也可以在节点右侧清单的"下一步"中添加并行节点,自动生成并行结构。 - - - 新建并行结构方式4 - - -**Tips:** - -* 画布上的"线"可以被删除; -* 并行结构的下游节点可以是任意节点; -* 在 Workflow 类型应用内需确定唯一的 "end" 节点; -* Chatflow 类型应用支持添加多个 **"直接回复"** 节点,该类型应用内的所有并行结构在末尾处均需要配置 **"直接回复"** 节点才能正常输出各个并行结构里的内容; -* 所有的并行结构都会同时运行;并行结构内的节点处理完任务后即输出结果,**输出结果时不存在顺序关系**。并行结构越简单,输出结果的速度越快。 - - - Chatflow 应用中的并行结构示例 - - -### 设计并行结构应用 - -下文将展示四种常见的并行节点设计思路。 - -1. **普通并行** - -普通并行指的是 `开始 | 并行结构 | 结束` 三层关系,也是并行结构的最小单元。这种结构较为直观,用户输入内容后,工作流能同时执行多条任务。 - -> 并行分支的上限数为 10 个。 - - - 普通并行结构示例 - - -2. **嵌套并行** - -嵌套并行指的是 `开始 | 多个并行结构 | 结束` 多层关系,它适用于内部较为复杂的工作流,例如需要在某个节点内请求外部 API,将返回的结果同时交给下游节点处理。 - -一个工作流内最多支持 3 层嵌套关系。 - - - 嵌套并行结构示例 - - -3. **条件分支 + 并行** - -并行结构也可以和条件分支共同使用。 - - - 条件分支和并行结构结合示例 - - -4. 
**迭代分支 + 并行** - -迭代分支内同样支持编排并行结构,加速迭代内各节点的执行效率。 - - - 迭代分支和并行结构结合示例 - - -### 查看并行结构应用日志 - -包含并行结构的应用的运行日志支持以树状结构进行展示,你可以折叠并行节点组以更好地查看各个节点的运行日志。 - - - 并行结构应用日志界面 - \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/flow-app/variables.mdx b/zh-hans/user-guide/build-app/flow-app/variables.mdx deleted file mode 100644 index 6d3020b5..00000000 --- a/zh-hans/user-guide/build-app/flow-app/variables.mdx +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: 变量 -version: '简体中文' ---- - -Workflow 和 Chatflow 类型应用由独立节点构成。大部分节点设有输入和输出项,但每个节点的输入信息不一致,各个节点所输出的答复也不尽相同。 - -如何用一种固定的符号**指代动态变化的内容?** 变量作为一种动态数据容器,能够存储和传递不固定的内容,在不同的节点内被相互引用,实现信息在节点间的灵活通信。 - -### **系统变量** - -系统变量指的是在 Chatflow / Workflow 应用内预设的系统级参数,可以被其它节点全局读取。系统级变量均以 `sys` 开头。 - -#### Workflow - -Workflow 类型应用提供以下系统变量:
| 变量名称 | 数据类型 | 说明 | 备注 |
|---------|--------|------|------|
| `sys.files` [LEGACY] | Array[File] | 文件参数,存储用户初始使用应用时上传的图片 | 图片上传功能需在应用编排页右上角的 "功能" 处开启 |
| `sys.user_id` | String | 用户 ID,每个用户在使用工作流应用时,系统会自动向用户分配唯一标识符,用以区分不同的对话用户 | |
| `sys.app_id` | String | 应用 ID,系统会向每个 Workflow 应用分配一个唯一的标识符,用以区分不同的应用,并通过此参数记录当前应用的基本信息 | 面向具备开发能力的用户,通过此参数区分并定位不同的 Workflow 应用 |
| `sys.workflow_id` | String | Workflow ID,用于记录当前 Workflow 应用内所包含的所有节点信息 | 面向具备开发能力的用户,可以通过此参数追踪并记录 Workflow 内的包含节点信息 |
| `sys.workflow_run_id` | String | Workflow 应用运行 ID,用于记录 Workflow 应用中的运行情况 | 面向具备开发能力的用户,可以通过此参数追踪应用的历次运行情况 |
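表中的系统变量可被应用内其它节点全局读取,例如在代码执行节点内引用它们来记录运行信息。下面是一段示意性草图(函数签名与返回字典的写法沿用本文其它代码节点示例的约定,将哪些字段写入日志纯属假设):

```python
def main(user_id: str, app_id: str, workflow_run_id: str) -> dict:
    # 将引用到的系统变量整理为一条结构化运行记录,
    # 便于交给下游节点或外部日志/监控系统采集
    return {
        "log": {
            "user": user_id,
            "app": app_id,
            "run": workflow_run_id,
        }
    }

# 运行时,三个入参分别映射 sys.user_id、sys.app_id、sys.workflow_run_id
print(main("user-001", "app-001", "run-001"))
```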
![](/zh-cn/img/c405efa31fd5708542fdc3bd7c0cb708.png)

#### Chatflow

Chatflow 类型应用提供以下系统变量:
| 变量名称 | 数据类型 | 说明 | 备注 |
|---------|--------|------|------|
| `sys.query` | String | 用户在对话框中初始输入的内容 | |
| `sys.files` | Array[File] | 用户在对话框内上传的图片 | 图片上传功能需在应用编排页右上角的 "功能" 处开启 |
| `sys.dialogue_count` | Number | 用户在与 Chatflow 类型应用交互时的对话轮数。每轮对话后自动计数增加 1,可以和 if-else 节点搭配出丰富的分支逻辑。例如到第 X 轮对话时,回顾历史对话并给出分析 | |
| `sys.conversation_id` | String | 对话框交互会话的唯一标识符,将所有相关的消息分组到同一个对话中,确保 LLM 针对同一个主题和上下文持续对话 | |
| `sys.user_id` | String | 分配给每个应用用户的唯一标识符,用以区分不同的对话用户 | |
| `sys.app_id` | String | 应用 ID,系统会向每个 Workflow 应用分配一个唯一的标识符,用以区分不同的应用,并通过此参数记录当前应用的基本信息 | 面向具备开发能力的用户,通过此参数区分并定位不同的 Workflow 应用 |
| `sys.workflow_id` | String | Workflow ID,用于记录当前 Workflow 应用内所包含的所有节点信息 | 面向具备开发能力的用户,可以通过此参数追踪并记录 Workflow 内的包含节点信息 |
| `sys.workflow_run_id` | String | Workflow 应用运行 ID,用于记录 Workflow 应用中的运行情况 | 面向具备开发能力的用户,可以通过此参数追踪应用的历次运行情况 |
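以表中的 `sys.dialogue_count` 为例,"到第 X 轮对话时回顾历史对话"这类分支,等价于如下的条件判断草图(每 5 轮触发一次仅为示意取值,实际应在 if-else 节点的条件内配置):

```python
def should_review(dialogue_count: int, every_n: int = 5) -> bool:
    # 与 if-else 节点中 "dialogue_count % every_n == 0" 的条件等价:
    # 每 every_n 轮对话触发一次"回顾历史对话"分支
    return dialogue_count > 0 and dialogue_count % every_n == 0

# 第 5、10 轮走回顾分支,其余轮次走默认分支
print([n for n in range(1, 11) if should_review(n)])  # → [5, 10]
```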
- -![](/zh-cn/img/e387366fe2643688d57e6b9a69eacb1b.png) - -### 环境变量 - -**环境变量用于保护工作流内所涉及的敏感信息**,例如运行工作流时所涉及的 API 密钥、数据库密码等。它们被存储在工作流程中,而不是代码中,以便在不同环境中共享。 - -![](/zh-cn/img/d27ebecdc87e630212fb0ed866bf8e9e.png) - -支持以下三种数据类型: - -* String 字符串 -* Number 数字 -* Secret 密钥 - -环境变量拥有以下特性: - -* 环境变量可在大部分节点内全局引用; -* 环境变量命名不可重复; -* 环境变量为只读变量,不可写入; - -### 会话变量 - -> 会话变量面向多轮对话场景,而 Workflow 类型应用的交互是线性而独立的,不存在多次对话交互的情况,因此会话变量仅适用于 Chatflow 类型(聊天助手 → 工作流编排)应用。 - -**会话变量允许应用开发者在同一个 Chatflow 会话内,指定需要被临时存储的特定信息,并确保在当前工作流内的多轮对话内都能够引用该信息**,如上下文、上传至对话框的文件(即将上线)、 用户在对话过程中所输入的偏好信息等。好比为 LLM 提供一个可以被随时查看的"备忘录",避免因 LLM 记忆出错而导致的信息偏差。 - -例如你可以将用户在首轮对话时输入的语言偏好存储至会话变量中,LLM 在回答时将参考会话变量中的信息,并在后续的对话中使用指定的语言回复用户。 - -![](/zh-cn/img/338f93f401142fea4936a67a615eba32.png) - -**会话变量**支持以下六种数据类型: - -* String 字符串 -* Number 数值 -* Object 对象 -* Array\[string] 字符串数组 -* Array\[number] 数值数组 -* Array\[object] 对象数组 - -**会话变量**具有以下特性: - -* 会话变量可在大部分节点内全局引用; -* 会话变量的写入需要使用[变量赋值](./nodes/variable-assigner)节点; -* 会话变量为可读写变量; - -关于如何将会话变量与变量赋值节点配合使用,请参考[变量赋值](./nodes/variable-assigner)节点说明。 - -### 注意事项 - -* 为避免变量名重复,节点命名不可重复 -* 节点的输出变量一般为固定变量,不可编辑 \ No newline at end of file diff --git a/zh-hans/user-guide/build-app/text-generator.mdx b/zh-hans/user-guide/build-app/text-generator.mdx deleted file mode 100644 index 8189cdfc..00000000 --- a/zh-hans/user-guide/build-app/text-generator.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: 文本生成应用 -version: '简体中文' ---- - -文本生成应用是一种专门产出特定内容格式的应用类型,这类应用允许用户输入具体需求或参数,随后自动生成符合预设格式的文本输出。 - -与提供持续对话能力的聊天助手不同,文本生成应用处理单次输入并生成结果,提供一次性的内容生成服务,例如 Midjourney Prompt Generator 或其它固定格式内容的生成器。 - -## 适用场景 - -文本生成应用适合于需要快速、批量生成标准化内容的场景,如报告撰写、模板化内容批处理生成等领域。 - -## 创建与编排应用 - -下文以一个 **周报生成器** 应用为例来介绍编排对话型应用。 - -### 1. 创建应用 - -在首页点击 "创建应用" 按钮创建应用。填上应用名称,应用类型选择**文本生成应用**。 - -![创建文本生成应用](https://assets-docs.dify.ai/2025/02/e18899de028f6faddcbf3e1d8ef94efb.png) - -### 2. 选择模型供应商 - -聊天助手应用的底层驱动能力为 AI 大模型,不同的底层模型将影响问答质量,因此需要先在[模型供应商](/zh-cn/user-guide/models/model-configuration)内配置所需的 API Key。 - -### 3. 
编排应用 - -创建应用后会自动跳转到应用概览页,你可以在此处为应用设置变量、添加上下文以及额外的功能。 - -![](/zh-cn/img/ed7368f117afa02ca5359472ea1167e8.png) - -**填写提示词** - -提示词用于约束 AI 给出专业的回复,让回应更加精确。你可以借助内置的提示生成器,编写合适的提示词。提示词内支持插入表单变量,例如 `{{input}}`。提示词中的变量会替换成用户填写的值。 - -示例: -1. 输入提示指令,要求给出一段周报编写场景的提示词。 -2. 右侧内容框将自动生成提示词。 -3. 你可以在提示词内插入自定义变量。 - -![](/zh-cn/img/592adde305ffe7a9b2f538b20a14483e.png) - -### 添加上下文 - -如果想要让 AI 基于知识库中的内容生成文本,例如生成符合企业内部标准的内容,可以在应用的“上下文"内引用知识库。 - -![](https://assets-docs.dify.ai/2025/02/838312f6b88927a08c9f113c9a557608.png) - -## 调试应用 - -在右侧填写用户输入项,输入内容进行调试。 - -![](/zh-cn/img/302d73e248bc80a357e83ab3d65dcf32.png) - -如果回答结果不理想,可以调整提示词和底层模型。你也可以使用多个模型同步进行调试,搭配出合适的配置。 - -![](/zh-cn/img/241e9bd270b26a8b9658a784ecf9bf13.png) - -**多个模型进行调试:** - -如果使用单一模型调试时感到效率低下,你也可以使用 **"多个模型进行调试"** 功能,批量检视模型的回答效果。 - -![](/zh-cn/img/5a6cff31c1fa94912ee39306739a2d6e.png) - -最多支持同时添加 4 个大模型。 - -![](/zh-cn/img/60109394134b665cb856505503f4d975.png) - -⚠️ 使用多模型调试功能时,如果仅看到部分大模型,这是因为暂未添加其它大模型的 Key。你可以在"增加新供应商"内手动添加多个模型的 Key。 - -### 应用扩展功能 - -若希望提升用户在使用应用时的体验,可以为应用添加额外的扩展功能,例如对话开场白、文件上传等功能。 - -## 发布应用 - -调试好应用后,点击右上角的 **"发布"** 按钮生成独立的 AI 应用。除了通过公开 URL 体验该应用,你也可以进行基于 APIs 的二次开发、嵌入至网站内等操作。详情请参考[发布应用](/zh-cn/user-guide/application-publishing/launch-your-webapp-quickly/web-app-settings)。 - -如果想定制已发布的应用,可以 Fork 我们开源的 WebApp 模板,基于模板修改为符合你的情景与风格需求的应用。 - -## 常见问题 - -**如何在文本生成器内添加自定义工具?** - -文本生成器类型应用不支持添加第三方工具,你可以在 Agent 类型应用内添加自定义工具。 diff --git a/zh-hans/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx b/zh-hans/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx index 9b5d3bf6..67c9e6ef 100644 --- a/zh-hans/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx +++ b/zh-hans/user-guide/knowledge-base/create-knowledge-and-upload-documents/readme.mdx @@ -34,7 +34,7 @@ title: 知识库创建步骤 在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ 
](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。Unstructured 能够高效地提取并转换你的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: * SaaS 版不可选,默认使用 Unstructured ETL; -* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; 文件解析支持格式的差异: diff --git a/zh-hans/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx b/zh-hans/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx index 1da90b2c..576eb7ef 100644 --- a/zh-hans/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx +++ b/zh-hans/user-guide/knowledge-base/knowledge-base-creation/introduction.mdx @@ -36,7 +36,7 @@ title: 创建步骤 在 RAG 的生产级应用中,为了获得更好的数据召回效果,需要对多源数据进行预处理和清洗,即 ETL (_extract, transform, load_)。为了增强非结构化/半结构化数据的预处理能力,Dify 支持了可选的 ETL 方案:**Dify ETL** 和[ ](https://docs.unstructured.io/welcome)[**Unstructured ETL** ](https://unstructured.io/)。Unstructured 能够高效地提取并转换你的数据为干净的数据用于后续的步骤。Dify 各版本的 ETL 方案选择: * SaaS 版不可选,默认使用 Unstructured ETL; -* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](https://docs.dify.ai/v/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; +* 社区版可选,默认使用 Dify ETL ,可通过[环境变量](/zh-hans/getting-started/install-self-hosted/environments#zhi-shi-ku-pei-zhi)开启 Unstructured ETL; 文件解析支持格式的差异: