From 423a4ffe17ef0d2b5aad68cb99594d6a2add383c Mon Sep 17 00:00:00 2001 From: "mintlify[bot]" <109931778+mintlify[bot]@users.noreply.github.com> Date: Fri, 8 Aug 2025 21:04:56 +0000 Subject: [PATCH 1/6] Documentation edits made through Mintlify web editor --- en/guides/workflow/node/code.mdx | 65 +++- en/guides/workflow/node/llm.mdx | 387 +++++++++++-------- zh-hans/guides/workflow/node/code.mdx | 72 +++- zh-hans/guides/workflow/node/llm.mdx | 514 ++++++++++++++------------ 4 files changed, 627 insertions(+), 411 deletions(-) diff --git a/en/guides/workflow/node/code.mdx b/en/guides/workflow/node/code.mdx index 366bacb2..6c98ad39 100644 --- a/en/guides/workflow/node/code.mdx +++ b/en/guides/workflow/node/code.mdx @@ -1,14 +1,13 @@ --- -title: Code Execution +title: "Code Execution" --- - ## Table of Contents -* [Introduction](#introduction) -* [Usage Scenarios](#usage-scenarios) -* [Local Deployment](#local-deployment) -* [Security Policies](#security-policies) +- [Introduction](#introduction) +- [Usage Scenarios](#usage-scenarios) +- [Local Deployment](#local-deployment) +- [Security Policies](#security-policies) ## Introduction @@ -16,7 +15,10 @@ The code node supports running Python/NodeJS code to perform data transformation This node significantly enhances the flexibility for developers, allowing them to embed custom Python or JavaScript scripts within the workflow and manipulate variables in ways that preset nodes cannot achieve. Through configuration options, you can specify the required input and output variables and write the corresponding execution code: - + ## Configuration @@ -113,8 +115,8 @@ def main() -> dict: This code snippet has the following issues: -* **Unauthorized file access:** The code attempts to read the "/etc/passwd" file, which is a critical system file in Unix/Linux systems that stores user account information. 
-* **Sensitive information disclosure:** The "/etc/passwd" file contains important information about system users, such as usernames, user IDs, group IDs, home directory paths, etc. Direct access could lead to information leakage. +- **Unauthorized file access:** The code attempts to read the "/etc/passwd" file, which is a critical system file in Unix/Linux systems that stores user account information. +- **Sensitive information disclosure:** The "/etc/passwd" file contains important information about system users, such as usernames, user IDs, group IDs, home directory paths, etc. Direct access could lead to information leakage. Dangerous code will be automatically blocked by Cloudflare WAF. You can check if it's been blocked by looking at the "Network" tab in your browser's "Web Developer Tools". @@ -126,7 +128,48 @@ DO NOT edit this section! It will be automatically generated by the script. */} +The **Code Fix** feature enables **automatic code correction** by leveraging the previous run’s `current_code` and `error_message` variables. + +# When a Code Node fails: + +- The system captures the code and error message. +- These are passed into the prompt as context variables. +- A new version of the code is generated for review and retry. + +**Configuration:** + +1. **Write Repair Prompt**: + + In the prompt editor, use the variable insertion menu (`/` or `{`) to insert variables. You may customize a prompt like: + +`Fix the following code based on this error message: Code: {{current_code}} Error: {{error_message}}` + + + +2. **Using Context Variables (if needed later in workflow)** + +To enable automatic code repair, reference the following **context variables** in your prompt: + +- `{{current_code}}`: The code from the last run of this node. +- `{{error_message}}`: The error message if the last run failed. + +You can also reference output variables from any predecessor nodes. 
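To make the repair loop concrete, a minimal code node body that this feature might operate on could look like the following. This is an illustrative sketch only: the argument and output names are hypothetical, following the `def main() -> dict` convention that code nodes use.

```python
def main(arg1: int, arg2: int) -> dict:
    # A code node must return a dict whose keys match its declared
    # output variables. If this body raised an exception on a previous
    # run, the failing source would be available as {{current_code}}
    # and the exception text as {{error_message}}, which the repair
    # prompt consumes to generate a corrected version.
    return {
        "result": arg1 + arg2,
    }
```
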
+ +These variables are automatically available when the Code Node is run and allow the model to use prior run information for iterative correction. + +3. **Use Version Management**: + 1. Each correction attempt is saved as a separate version (e.g., Version 1, Version 2). + 2. Users can **switch between versions** via the dropdown in the result display area. + +**Notes:** + +- `error_message` is empty if the last run succeeds. +- `last_run` can be used to reference previous input/output. + +This reduces manual copy-paste and allows iterative debugging directly within the workflow. + --- -[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/guides/workflow/node/code.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml) - +[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/guides/workflow/node/code.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml) \ No newline at end of file diff --git a/en/guides/workflow/node/llm.mdx b/en/guides/workflow/node/llm.mdx index feca8427..6e492448 100644 --- a/en/guides/workflow/node/llm.mdx +++ b/en/guides/workflow/node/llm.mdx @@ -1,5 +1,5 @@ --- -title: LLM +title: "LLM" --- ### Definition @@ -8,24 +8,24 @@ Invokes the capabilities of large language models to process information input b ![LLM Node](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/85730fbfa1d441d12d969b89adf2670e.png) -*** +--- ### Scenarios LLM is the core node of Chatflow/Workflow, utilizing the conversational/generative/classification/processing capabilities of large language models to handle a wide range of tasks based on given prompts and can be used in different stages of workflows. -* **Intent Recognition**: In customer service scenarios, identifying and classifying user inquiries to guide downstream processes. 
-* **Text Generation**: In content creation scenarios, generating relevant text based on themes and keywords. -* **Content Classification**: In email batch processing scenarios, automatically categorizing emails, such as inquiries/complaints/spam. -* **Text Conversion**: In translation scenarios, translating user-provided text into a specified language. -* **Code Generation**: In programming assistance scenarios, generating specific business code or writing test cases based on user requirements. -* **RAG**: In knowledge base Q\&A scenarios, reorganizing retrieved relevant knowledge to respond to user questions. -* **Image Understanding**: Using multimodal models with vision capabilities to understand and answer questions about the information within images. -* **File Analysis**: In file processing scenarios, use LLMs to recognize and analyze the information contained within files. +- **Intent Recognition**: In customer service scenarios, identifying and classifying user inquiries to guide downstream processes. +- **Text Generation**: In content creation scenarios, generating relevant text based on themes and keywords. +- **Content Classification**: In email batch processing scenarios, automatically categorizing emails, such as inquiries/complaints/spam. +- **Text Conversion**: In translation scenarios, translating user-provided text into a specified language. +- **Code Generation**: In programming assistance scenarios, generating specific business code or writing test cases based on user requirements. +- **RAG**: In knowledge base Q&A scenarios, reorganizing retrieved relevant knowledge to respond to user questions. +- **Image Understanding**: Using multimodal models with vision capabilities to understand and answer questions about the information within images. +- **File Analysis**: In file processing scenarios, use LLMs to recognize and analyze the information contained within files. 
By selecting the appropriate model and writing prompts, you can build powerful and reliable solutions within Chatflow/Workflow. -*** +--- ### How to Configure @@ -39,7 +39,7 @@ By selecting the appropriate model and writing prompts, you can build powerful a 4. **Advanced Settings**: You can enable memory, set memory windows, and use the Jinja-2 template language for more complex prompts. -If you are using Dify for the first time, you need to complete the [model configuration](/en/guides/model-configuration) in **System Settings-Model Providers** before selecting a model in the LLM node. + If you are using Dify for the first time, you need to complete the [model configuration](/en/guides/model-configuration) in **System Settings-Model Providers** before selecting a model in the LLM node. #### **Writing Prompts** @@ -56,7 +56,7 @@ In the prompt editor, you can call out the **variable insertion menu** by typing ![Calling Out the Variable Insertion Menu](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d8ed0160a7fba0a14dd823ef97610cc4.png) -*** +--- ### Explanation of Special Variables @@ -66,12 +66,12 @@ Context variables are a special type of variable defined within the LLM node, us ![Context Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/5aefed96962bd994f8f05bac96b11e22.png) -In common knowledge base Q\&A applications, the downstream node of knowledge retrieval is typically the LLM node. The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt. +In common knowledge base Q&A applications, the downstream node of knowledge retrieval is typically the LLM node. 
The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt. This variable can be used not only as external knowledge introduced into the prompt context for LLM responses but also supports the application's [citation and attribution](/en/guides/knowledge-base/retrieval-test-and-citation#id-2-citation-and-attribution) feature due to its data structure containing segment reference information. -If the context variable is associated with a common variable from an upstream node, such as a string type variable from the start node, the context variable can still be used as external knowledge, but the **citation and attribution** feature will be disabled. + If the context variable is associated with a common variable from an upstream node, such as a string type variable from the start node, the context variable can still be used as external knowledge, but the **citation and attribution** feature will be disabled. **File Variables** @@ -87,7 +87,7 @@ Some LLMs, such as [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build- To achieve conversational memory in text completion models (e.g., gpt-3.5-turbo-Instruct), Dify designed the conversation history variable in the original [Prompt Expert Mode (discontinued)](/en/learn-more/extended-reading/prompt-engineering/prompt-engineering-1). This variable is carried over to the LLM node in Chatflow, used to insert chat history between the AI and the user into the prompt, helping the LLM understand the context of the conversation. -The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow. 
+ The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow. ![Inserting Conversation History Variable](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/b8642f8c6e3f562fceeefae83628fd68.png) @@ -110,9 +110,12 @@ The main parameter terms are explained as follows: If you do not understand what these parameters are, you can choose to load presets and select from the three presets: Creative, Balanced, and Precise. - + -*** +--- ### Advanced Features @@ -136,188 +139,174 @@ If you do not understand what these parameters are, you can choose to load prese **Structured Outputs**: Ensures LLM returns data in a usable, stable, and predictable format, helping users to control exactly how their LLM nodes return data. + The **JSON Schema Editor** in LLM nodes lets you define how you want your data structured. You can use either the **Visual Editor** for a user-friendly experience or the **JSON Schema** for more precise control. -The **JSON Schema Editor** in LLM nodes lets you define how you want your data structured. You can use either the **Visual Editor** for a user-friendly experience or the **JSON Schema** for more precise control. + + JSON Schema Editor supports structured outputs across all models: - -JSON Schema Editor supports structured outputs across all models: + - Models with Native Support: Can directly use JSON Schema definitions. + - Models without Native Support: Not all models handle structured outputs reliably. We will include your schema in the prompt, but response formatting may vary by model. + + **Get Started** -- Models with Native Support: Can directly use JSON Schema definitions. + Access the editor through **LLM Node \> Output Variables \> Structured \> Configure**. You can switch between visual and JSON Schema editing modes. -- Models without Native Support: Not all models handle structured outputs reliably. 
We will include your schema in the prompt, but response formatting may vary by model. - + ![JSON Schema Editor](https://assets-docs.dify.ai/2025/04/646805384efa3cd85869d23a4d9735ad.png) -**Get Started** + **_Visual Editor_** -Access the editor through **LLM Node > Output Variables > Structured > Configure**. You can switch between visual and JSON Schema editing modes. + **When to Use** -![JSON Schema Editor](https://assets-docs.dify.ai/2025/04/646805384efa3cd85869d23a4d9735ad.png) + - For simple fields such as `name`, `email`, `age` without nested structures + - If you prefer a drag-and-drop way over writing JSON + - When you need to quickly iterate on your schema structure -***Visual Editor*** + ![Visual Editor](https://assets-docs.dify.ai/2025/04/a9d6a34a7903f81e4d57c7f1d8d0712b.png) -**When to Use** + **Add Fields** -- For simple fields such as `name`, `email`, `age` without nested structures + Click **Add Field** and set parameters below: -- If you prefer a drag-and-drop way over writing JSON + - _(required)_ Field Name + - _(required)_ Field Type: Choose from string, number, object, array, etc. -- When you need to quickly iterate on your schema structure + > Note: Object and array type fields can contain child fields. + - Description: Helps the LLM understand what the field means. + - Required: Ensures the LLM always includes this field in its output. + - Enum: Restricts possible values. For example, to allow only red, green, blue: -![Visual Editor](https://assets-docs.dify.ai/2025/04/a9d6a34a7903f81e4d57c7f1d8d0712b.png) + ```json + { + "type": "string", + "enum": ["red", "green", "blue"] + } + ``` -**Add Fields** + **Manage Fields** -Click **Add Field** and set parameters below: + - To Edit: Hover over a field and click the Edit icon. + - To Delete: Hover over a field and click the Delete icon. -- *(required)* Field Name + > Note: Deleting an object or array removes all its child fields. 
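As a rough illustration of what the `Required` and `Enum` settings enforce, a downstream consumer of the structured output could verify them like this. This is a stdlib-only sketch with hypothetical field names, not a full JSON Schema validator.

```python
# A simplified schema mirroring the Visual Editor settings above:
# one required "color" field restricted to three enum values.
schema = {
    "type": "object",
    "properties": {
        "color": {"type": "string", "enum": ["red", "green", "blue"]},
    },
    "required": ["color"],
}

def check(instance: dict, schema: dict) -> bool:
    """Check only the `required` and `enum` constraints."""
    for field in schema.get("required", []):
        if field not in instance:
            return False
    for field, rules in schema["properties"].items():
        if field in instance and "enum" in rules:
            if instance[field] not in rules["enum"]:
                return False
    return True

# check({"color": "red"}, schema)    -> True
# check({"color": "purple"}, schema) -> False
```
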
-- *(required)* Field Type: Choose from string, number, object, array, etc. + **Import from JSON** - > Note: Object and array type fields can contain child fields. + 1. Click **Import from JSON** and paste your example: -- Description: Helps the LLM understand what the field means. + ```json + { + "comment": "This is great!", + "rating": 5 + } + ``` -- Required: Ensures the LLM always includes this field in its output. + 2. Click **Submit** to convert it into a schema. -- Enum: Restricts possible values. For example, to allow only red, green, blue: + **Generate with AI** -```json -{ - "type": "string", - "enum": ["red", "green", "blue"] -} -``` - -**Manage Fields** - -- To Edit: Hover over a field and click the Edit icon. - -- To Delete: Hover over a field and click the Delete icon. - - > Note: Deleting an object or array removes all its child fields. - -**Import from JSON** - -1. Click **Import from JSON** and paste your example: - -```json -{ - "comment": "This is great!", - "rating": 5 -} -``` - -2. Click **Submit** to convert it into a schema. - -**Generate with AI** - -1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: + 1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: > "I need a JSON Schema for user profiles with username (string), age (number), and interests (array)." -2. Click **Generate** to create a schema: + 2. 
Click **Generate** to create a schema: -```json -{ - "type": "object", - "properties": { - "username": { - "type": "string" - }, - "age": { - "type": "number" - }, - "interests": { - "type": "array", - "items": { + ```json + { + "type": "object", + "properties": { + "username": { "type": "string" + }, + "age": { + "type": "number" + }, + "interests": { + "type": "array", + "items": { + "type": "string" + } } - } - }, - "required": ["username", "age", "interests"] -} -``` + }, + "required": ["username", "age", "interests"] + } + ``` -***JSON Schema*** + **_JSON Schema_** -**When to Use** + **When to Use** -- For complex fields that need nesting, (e.g., `order_details`, `product_lists`) + - For complex fields that need nesting, (e.g., `order_details`, `product_lists`) + - When you want to import and modify existing JSON Schemas or API examples + - When you need advanced schema features, such as `pattern` (regex matching) or `oneOf` (multiple type support) + - When you want to fine-tune an AI-generated schema to fit your exact requirements -- When you want to import and modify existing JSON Schemas or API examples + ![JSON Schema](https://assets-docs.dify.ai/2025/04/669af808dd9d0d8521a36e14db731cec.png) -- When you need advanced schema features, such as `pattern` (regex matching) or `oneOf` (multiple type support) + **Add Fields** -- When you want to fine-tune an AI-generated schema to fit your exact requirements + 1. Click **Import from JSON** and add your field structure: -![JSON Schema](https://assets-docs.dify.ai/2025/04/669af808dd9d0d8521a36e14db731cec.png) + ```json + { + "name": "username", + "type": "string", + "description": "user's name", + "required": true + } + ``` -**Add Fields** + 2. Click **Save**. Your schema will be validated automatically. -1. Click **Import from JSON** and add your field structure: + **Manage Fields**: Edit field types, descriptions, default values, etc. in the JSON code box, and then click **Save**. 
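For example, a schema combining the advanced `pattern` and `oneOf` features mentioned above might look like this. The field names here are an illustrative sketch, not part of the Dify documentation:

```json
{
  "type": "object",
  "properties": {
    "order_id": {
      "type": "string",
      "pattern": "^ORD-[0-9]{6}$"
    },
    "discount": {
      "oneOf": [
        { "type": "number" },
        { "type": "null" }
      ]
    }
  },
  "required": ["order_id"]
}
```

Here `pattern` constrains `order_id` to a fixed regex shape, and `oneOf` allows `discount` to be either a number or null.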
-```json -{ - "name": "username", - "type": "string", - "description": "user's name", - "required": true -} -``` + **Import from JSON** -2. Click **Save**. Your schema will be validated automatically. + 1. Click **Import from JSON** and paste your example: -**Manage Fields**: Edit field types, descriptions, default values, etc. in the JSON code box, and then click **Save**. + ```json + { + "comment": "This is great!", + "rating": 5 + } + ``` -**Import from JSON** + 2. Click **Submit** to convert it into a schema. -1. Click **Import from JSON** and paste your example: + **Generate with AI** -```json -{ - "comment": "This is great!", - "rating": 5 -} -``` - -2. Click **Submit** to convert it into a schema. - -**Generate with AI** - -1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: + 1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: > "I need a JSON Schema for user profiles with username (string), age (number), and interests (array)." -2. Click **Generate** to create a schema: + 2. 
Click **Generate** to create a schema: -```json -{ - "type": "object", - "properties": { - "username": { - "type": "string" - }, - "age": { - "type": "number" - }, - "interests": { - "type": "array", - "items": { + ```json + { + "type": "object", + "properties": { + "username": { "type": "string" + }, + "age": { + "type": "number" + }, + "interests": { + "type": "array", + "items": { + "type": "string" + } } - } - }, - "required": ["username", "age", "interests"] -} -``` - + }, + "required": ["username", "age", "interests"] + } + ``` -*** +--- #### Use Cases -* **Reading Knowledge Base Content** +- **Reading Knowledge Base Content** To enable workflow applications to read "[Knowledge Base](../../knowledge-base/)" content, such as building an intelligent customer service application, please follow these steps: @@ -330,22 +319,22 @@ To enable workflow applications to read "[Knowledge Base](../../knowledge-base/) The `result` variable output by the Knowledge Retrieval Node also includes segmented reference information. You can view the source of information through the **Citation and Attribution** feature. -Regular variables from upstream nodes can also be filled into context variables, such as string-type variables from the start node, but the **Citation and Attribution** feature will be ineffective. + Regular variables from upstream nodes can also be filled into context variables, such as string-type variables from the start node, but the **Citation and Attribution** feature will be ineffective. -* **Reading Document Files** +- **Reading Document Files** To enable workflow applications to read document contents, such as building a ChatPDF application, you can follow these steps: -* Add a file variable in the "Start" node; -* Add a document extractor node upstream of the LLM node, using the file variable as an input variable; -* Fill in the **output variable** `text` of the document extractor node into the prompt of the LLM node. 
+- Add a file variable in the "Start" node; +- Add a document extractor node upstream of the LLM node, using the file variable as an input variable; +- Fill in the **output variable** `text` of the document extractor node into the prompt of the LLM node. For more information, please refer to [File Upload](../file-upload). ![](https://assets-docs.dify.ai/2025/04/c74cf0c58aaf1f35e515044deec2a88c.png) -* **Error Handling** +- **Error Handling** When processing information, LLM nodes may encounter errors such as input text exceeding token limits or missing key parameters. Developers can follow these steps to configure exception branches, enabling contingency plans when node errors occur to avoid interrupting the entire flow: @@ -356,21 +345,13 @@ When processing information, LLM nodes may encounter errors such as input text e For more information about exception handling methods, please refer to the [Error Handling](https://docs.dify.ai/guides/workflow/error-handling). -* **Structured Outputs** +- **Structured Outputs** **Case: Customer Information Intake Form** Watch the following video to learn how to use JSON Schema Editor to collect customer information: - + + +## プロンプト最適化 + +この機能は、満足のいかない出力に基づいてプロンプトを反復的に改善することを可能にします。この機能により、ユーザーは前回の実行結果を直接参照しながら指示を洗練でき、プロンプトの作成、モデルの出力、フィードバックの間に閉ループを実現します。 + +### 目的 + +以前、LLMノードは**ステートレス**でした。プロンプトの効果が期待ほどではないの場合、ユーザーは試行錯誤で改善方法を見つける必要がありました。本機能により、以下が可能になります: + +- コンテキスト変数を使用して前回の出力を参照 +- 比較用のために、“理想的な出力”を定義 +- プロンプトジェネレーターUIを使用して最適化されたプロンプトを再生成 + +--- + +### 仕組み + +LLMノードでプロンプトを実行後: + +- システムは“オリジナルプロンプト”とその“最後の出力”を取得 +- これらはコンテキスト変数として提供されます: + - `{{current_prompt}}`:このノード内の現在のプロンプト + - `{{last_run}}`:このノードの前回の入力と出力 + +これらの変数は、反復的な改善のためにモデルに文脈を提供するため、プロンプトエディタに / や {} を使用して直接挿入できます。 + +--- + +### 設定手順 + +1. プロンプトジェネレーターを開く + +最適化アイコンをクリックしてプロンプトジェネレーターを起動します。これにより、以下のような修正指示が自動的に入力されます: + +`このプロンプトの出力は期待通りではありません:{{last_run}}。理想的な出力に基づいてプロンプトを編集してください。` + +![Llm PN](/images/llm.PNG) + +2. 
指示をカスタマイズする + +左側のプロンプトエディタで指示を編集し、変更・改善点(例: トーン、構造など)を反映させます。 + +3. 理想的な出力ボックスの使用 + +理想的な出力領域を展開し、サンプルや期待される出力形式を記述します。これはプロンプト再生成中にモデルの参考として使用されます。 + +注意:理想的な出力ボックスには変数を挿入することはできません。これは静的な例のためのものです。 + +例えば、タスクがモデルにニュース記事の簡潔な要約を正確に3つの箇条書きで出力するようにプロンプトを書き直すことであれば、理想的な出力は以下のようになります: + +![Llm PN](/images/llm2.PNG) + +4. 最適化されたプロンプトを生成する + +生成をクリックすると、システムがユーザーの指示と参照出力に基づいてプロンプトを書き換えます。新しいバージョンをすぐにテストできます。 + +--- + +### バージョン管理 + +プロンプト再生成は、新しいバージョンとしてそれぞれ保存されます: + +- 出力エリアには「バージョン1」、「バージョン2」などのラベルの付いたドロップダウンが含まれています。 +- 結果を比較するためにバージョン間で切り替えて比較できます。 +- バージョンが1つだけ存在する場合、ドロップダウンは隠れます。 + +--- + +### メモ + +- `last_run`には、このLLMノード特有の前回の入力/出力が含まれます。 +- これは、コードノードの`current_code`や`error_message`とは異なります。 +- この機能は、ワークフローの連続性を損なうことなく、プロンプトの反復を改善します。 + +プロンプト最適化は、開発者やローコードユーザーが文脈に基づいてプロンプトを微調整する手助けをし、推測を減らしてガイド付きの反復を通じて成果を向上させます。 + {/* Contributing Section DO NOT edit this section! diff --git a/zh-hans/guides/workflow/node/code.mdx b/zh-hans/guides/workflow/node/code.mdx index 7155fabe..266ec61f 100644 --- a/zh-hans/guides/workflow/node/code.mdx +++ b/zh-hans/guides/workflow/node/code.mdx @@ -121,13 +121,9 @@ def main() -> dict: ![Cloudflare WAF](https://assets-docs.dify.ai/dify-enterprise-mintlify/zh_CN/guides/workflow/node/d1fe121991c51b26b66d42a55b18fb57.png) -{/* -Contributing Section -DO NOT edit this section! -It will be automatically generated by the script. -*/} +## 代码修复 -代码修复功能通过利用上次运行`current_codeerror_message`变量实现自动代码纠正。 +此功能通过利用上次运行`current_codeerror_message`变量实现自动代码纠正。 当代码节点运行失败时: @@ -135,45 +131,39 @@ It will be automatically generated by the script. - 这些信息会作为上下文变量传递到提示中。 - 系统会生成一个新版本的代码供审查和重试。 -## 配置: +**配置:** 1. **编写修复提示:** 你可以自定义一个提示,例如: -在提示编辑器中,使用变量插入菜单(“/”或“{”)插入变量。 - -根据以下错误信息修复代码: +在提示编辑器中,使用变量插入菜单(`/`或`{`)插入变量。 +`根据以下错误信息修复代码: 代码: - {{current_code}} - -{{current_code}} - 错误: - -{{error_message}} +{{error_message}}` ![Codefix PN](/images/codefix.PNG) -**使用上下文变量(如果在工作流程后续需要)** +2. 
**使用上下文变量(如果在工作流程后续需要)** 要启用自动代码修复,请在提示中引用以下上下文变量: -- {current_code}:此节点上次运行的代码。 -- {error_message}:如果上次运行失败,则为错误消息。 +- `{{current_code}}`:此节点上次运行的代码。 +- `{{error_message}}`:如果上次运行失败,则为错误消息。 你还可以引用任何前置节点的输出变量。 当代码节点运行时,这些变量会自动可用,并允许模型使用先前的运行信息进行迭代修正。 -## 使用版本管理: +3. **版本管理** - 每次修正尝试都会保存为一个单独的版本(例如,版本1、版本2)。 - 用户可以通过结果显示区域的下拉菜单在不同版本间切换。 -注意事项: +**注意事项:** - 如果上次运行成功,error_message为空。 - last_run可用于引用先前的输入/输出。 @@ -182,4 +172,10 @@ It will be automatically generated by the script. --- +{/* +Contributing Section +DO NOT edit this section! +It will be automatically generated by the script. +*/} + [编辑此页面](https://github.com/langgenius/dify-docs/edit/main/zh-hans/guides/workflow/node/code.mdx) | [提交问题](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml) \ No newline at end of file diff --git a/zh-hans/guides/workflow/node/llm.mdx b/zh-hans/guides/workflow/node/llm.mdx index bf5b6edd..f80c7f92 100644 --- a/zh-hans/guides/workflow/node/llm.mdx +++ b/zh-hans/guides/workflow/node/llm.mdx @@ -384,15 +384,11 @@ LLM 节点处理信息时有可能会遇到输入文本超过 Token 限制,未