diff --git a/en/guides/workflow/node/code.mdx b/en/guides/workflow/node/code.mdx index 366bacb2..6c98ad39 100644 --- a/en/guides/workflow/node/code.mdx +++ b/en/guides/workflow/node/code.mdx @@ -1,14 +1,13 @@ --- -title: Code Execution +title: "Code Execution" --- - ## Table of Contents -* [Introduction](#introduction) -* [Usage Scenarios](#usage-scenarios) -* [Local Deployment](#local-deployment) -* [Security Policies](#security-policies) +- [Introduction](#introduction) +- [Usage Scenarios](#usage-scenarios) +- [Local Deployment](#local-deployment) +- [Security Policies](#security-policies) ## Introduction @@ -16,7 +15,10 @@ The code node supports running Python/NodeJS code to perform data transformation This node significantly enhances the flexibility for developers, allowing them to embed custom Python or JavaScript scripts within the workflow and manipulate variables in ways that preset nodes cannot achieve. Through configuration options, you can specify the required input and output variables and write the corresponding execution code: - + ## Configuration @@ -113,8 +115,8 @@ def main() -> dict: This code snippet has the following issues: -* **Unauthorized file access:** The code attempts to read the "/etc/passwd" file, which is a critical system file in Unix/Linux systems that stores user account information. -* **Sensitive information disclosure:** The "/etc/passwd" file contains important information about system users, such as usernames, user IDs, group IDs, home directory paths, etc. Direct access could lead to information leakage. +- **Unauthorized file access:** The code attempts to read the "/etc/passwd" file, which is a critical system file in Unix/Linux systems that stores user account information. +- **Sensitive information disclosure:** The "/etc/passwd" file contains important information about system users, such as usernames, user IDs, group IDs, home directory paths, etc. Direct access could lead to information leakage. 
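By contrast, a well-behaved code-node function only transforms its declared inputs and returns a dict of declared outputs. A minimal sketch following the `def main() -> dict` convention used by the code node (the variable names here are illustrative, not part of the product):

```python
def main(items: list) -> dict:
    # Normalize a list of raw strings: strip whitespace, lowercase,
    # drop empty entries, and deduplicate with a stable sorted order.
    cleaned = sorted({s.strip().lower() for s in items if s.strip()})
    # Keys must match the output variables declared in the node's settings.
    return {"result": cleaned}
```

The function touches no files, network, or system resources, so it stays comfortably within the sandbox's security policies.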
Dangerous code will be automatically blocked by Cloudflare WAF. You can check if it's been blocked by looking at the "Network" tab in your browser's "Web Developer Tools".

@@ -126,7 +128,48 @@
 DO NOT edit this section! It will be automatically generated by the script. */}
+The **Code Fix** feature enables **automatic code correction** by leveraging the previous run’s `current_code` and `error_message` variables.
+
+**When a Code Node fails:**
+
+- The system captures the code and error message.
+- These are passed into the prompt as context variables.
+- A new version of the code is generated for review and retry.
+
+**Configuration:**
+
+1. **Write a Repair Prompt**:
+
+   In the prompt editor, use the variable insertion menu (`/` or `{`) to insert variables. You may customize a prompt like:
+
+   `Fix the following code based on this error message: Code: {{current_code}} Error: {{error_message}}`
+
+2. **Use Context Variables (if needed later in the workflow)**:
+
+   To enable automatic code repair, reference the following **context variables** in your prompt:
+
+   - `{{current_code}}`: The code from the last run of this node.
+   - `{{error_message}}`: The error message if the last run failed.
+
+   You can also reference output variables from any predecessor nodes.
+
+   These variables are automatically available when the Code Node runs and allow the model to use prior run information for iterative correction.
+
+3. **Use Version Management**:
+   1. Each correction attempt is saved as a separate version (e.g., Version 1, Version 2).
+   2. Users can **switch between versions** via the dropdown in the result display area.
+
+**Notes:**
+
+- `error_message` is empty if the last run succeeded.
+- `last_run` can be used to reference previous input/output.
+
+This reduces manual copying and pasting and allows iterative debugging directly within the workflow.
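Conceptually, the repair prompt is just template substitution: the two context variables are filled in before the prompt reaches the model. A rough, stdlib-only sketch of that step (Dify performs this substitution internally; the function name below is hypothetical):

```python
def build_repair_prompt(current_code: str, error_message: str) -> str:
    # Substitute the context variables into the repair-prompt template,
    # mirroring {{current_code}} and {{error_message}} in the editor.
    template = (
        "Fix the following code based on this error message:\n"
        "Code: {current_code}\n"
        "Error: {error_message}"
    )
    return template.format(current_code=current_code,
                           error_message=error_message)
```

On a successful run, `error_message` would simply be empty, and the repair branch would not be taken.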
+ --- -[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/guides/workflow/node/code.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml) - +[Edit this page](https://github.com/langgenius/dify-docs/edit/main/en/guides/workflow/node/code.mdx) | [Report an issue](https://github.com/langgenius/dify-docs/issues/new?template=docs.yml) \ No newline at end of file diff --git a/en/guides/workflow/node/llm.mdx b/en/guides/workflow/node/llm.mdx index feca8427..6e492448 100644 --- a/en/guides/workflow/node/llm.mdx +++ b/en/guides/workflow/node/llm.mdx @@ -1,5 +1,5 @@ --- -title: LLM +title: "LLM" --- ### Definition @@ -8,24 +8,24 @@ Invokes the capabilities of large language models to process information input b ![LLM Node](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/85730fbfa1d441d12d969b89adf2670e.png) -*** +--- ### Scenarios LLM is the core node of Chatflow/Workflow, utilizing the conversational/generative/classification/processing capabilities of large language models to handle a wide range of tasks based on given prompts and can be used in different stages of workflows. -* **Intent Recognition**: In customer service scenarios, identifying and classifying user inquiries to guide downstream processes. -* **Text Generation**: In content creation scenarios, generating relevant text based on themes and keywords. -* **Content Classification**: In email batch processing scenarios, automatically categorizing emails, such as inquiries/complaints/spam. -* **Text Conversion**: In translation scenarios, translating user-provided text into a specified language. -* **Code Generation**: In programming assistance scenarios, generating specific business code or writing test cases based on user requirements. -* **RAG**: In knowledge base Q\&A scenarios, reorganizing retrieved relevant knowledge to respond to user questions. 
-* **Image Understanding**: Using multimodal models with vision capabilities to understand and answer questions about the information within images. -* **File Analysis**: In file processing scenarios, use LLMs to recognize and analyze the information contained within files. +- **Intent Recognition**: In customer service scenarios, identifying and classifying user inquiries to guide downstream processes. +- **Text Generation**: In content creation scenarios, generating relevant text based on themes and keywords. +- **Content Classification**: In email batch processing scenarios, automatically categorizing emails, such as inquiries/complaints/spam. +- **Text Conversion**: In translation scenarios, translating user-provided text into a specified language. +- **Code Generation**: In programming assistance scenarios, generating specific business code or writing test cases based on user requirements. +- **RAG**: In knowledge base Q&A scenarios, reorganizing retrieved relevant knowledge to respond to user questions. +- **Image Understanding**: Using multimodal models with vision capabilities to understand and answer questions about the information within images. +- **File Analysis**: In file processing scenarios, use LLMs to recognize and analyze the information contained within files. By selecting the appropriate model and writing prompts, you can build powerful and reliable solutions within Chatflow/Workflow. -*** +--- ### How to Configure @@ -39,7 +39,7 @@ By selecting the appropriate model and writing prompts, you can build powerful a 4. **Advanced Settings**: You can enable memory, set memory windows, and use the Jinja-2 template language for more complex prompts. -If you are using Dify for the first time, you need to complete the [model configuration](/en/guides/model-configuration) in **System Settings-Model Providers** before selecting a model in the LLM node. 
+ If you are using Dify for the first time, you need to complete the [model configuration](/en/guides/model-configuration) in **System Settings-Model Providers** before selecting a model in the LLM node. #### **Writing Prompts** @@ -56,7 +56,7 @@ In the prompt editor, you can call out the **variable insertion menu** by typing ![Calling Out the Variable Insertion Menu](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/d8ed0160a7fba0a14dd823ef97610cc4.png) -*** +--- ### Explanation of Special Variables @@ -66,12 +66,12 @@ Context variables are a special type of variable defined within the LLM node, us ![Context Variables](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/5aefed96962bd994f8f05bac96b11e22.png) -In common knowledge base Q\&A applications, the downstream node of knowledge retrieval is typically the LLM node. The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt. +In common knowledge base Q&A applications, the downstream node of knowledge retrieval is typically the LLM node. The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt. This variable can be used not only as external knowledge introduced into the prompt context for LLM responses but also supports the application's [citation and attribution](/en/guides/knowledge-base/retrieval-test-and-citation#id-2-citation-and-attribution) feature due to its data structure containing segment reference information. 
-If the context variable is associated with a common variable from an upstream node, such as a string type variable from the start node, the context variable can still be used as external knowledge, but the **citation and attribution** feature will be disabled. + If the context variable is associated with a common variable from an upstream node, such as a string type variable from the start node, the context variable can still be used as external knowledge, but the **citation and attribution** feature will be disabled. **File Variables** @@ -87,7 +87,7 @@ Some LLMs, such as [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build- To achieve conversational memory in text completion models (e.g., gpt-3.5-turbo-Instruct), Dify designed the conversation history variable in the original [Prompt Expert Mode (discontinued)](/en/learn-more/extended-reading/prompt-engineering/prompt-engineering-1). This variable is carried over to the LLM node in Chatflow, used to insert chat history between the AI and the user into the prompt, helping the LLM understand the context of the conversation. -The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow. + The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow. ![Inserting Conversation History Variable](https://assets-docs.dify.ai/dify-enterprise-mintlify/en/guides/workflow/node/b8642f8c6e3f562fceeefae83628fd68.png) @@ -110,9 +110,12 @@ The main parameter terms are explained as follows: If you do not understand what these parameters are, you can choose to load presets and select from the three presets: Creative, Balanced, and Precise. 
- + -*** +--- ### Advanced Features @@ -136,188 +139,174 @@ If you do not understand what these parameters are, you can choose to load prese **Structured Outputs**: Ensures LLM returns data in a usable, stable, and predictable format, helping users to control exactly how their LLM nodes return data. + The **JSON Schema Editor** in LLM nodes lets you define how you want your data structured. You can use either the **Visual Editor** for a user-friendly experience or the **JSON Schema** for more precise control. -The **JSON Schema Editor** in LLM nodes lets you define how you want your data structured. You can use either the **Visual Editor** for a user-friendly experience or the **JSON Schema** for more precise control. + + JSON Schema Editor supports structured outputs across all models: - -JSON Schema Editor supports structured outputs across all models: + - Models with Native Support: Can directly use JSON Schema definitions. + - Models without Native Support: Not all models handle structured outputs reliably. We will include your schema in the prompt, but response formatting may vary by model. + + **Get Started** -- Models with Native Support: Can directly use JSON Schema definitions. + Access the editor through **LLM Node \> Output Variables \> Structured \> Configure**. You can switch between visual and JSON Schema editing modes. -- Models without Native Support: Not all models handle structured outputs reliably. We will include your schema in the prompt, but response formatting may vary by model. - + ![JSON Schema Editor](https://assets-docs.dify.ai/2025/04/646805384efa3cd85869d23a4d9735ad.png) -**Get Started** + **_Visual Editor_** -Access the editor through **LLM Node > Output Variables > Structured > Configure**. You can switch between visual and JSON Schema editing modes. 
+ **When to Use** -![JSON Schema Editor](https://assets-docs.dify.ai/2025/04/646805384efa3cd85869d23a4d9735ad.png) + - For simple fields such as `name`, `email`, `age` without nested structures + - If you prefer a drag-and-drop way over writing JSON + - When you need to quickly iterate on your schema structure -***Visual Editor*** + ![Visual Editor](https://assets-docs.dify.ai/2025/04/a9d6a34a7903f81e4d57c7f1d8d0712b.png) -**When to Use** + **Add Fields** -- For simple fields such as `name`, `email`, `age` without nested structures + Click **Add Field** and set parameters below: -- If you prefer a drag-and-drop way over writing JSON + - _(required)_ Field Name + - _(required)_ Field Type: Choose from string, number, object, array, etc. -- When you need to quickly iterate on your schema structure + > Note: Object and array type fields can contain child fields. + - Description: Helps the LLM understand what the field means. + - Required: Ensures the LLM always includes this field in its output. + - Enum: Restricts possible values. For example, to allow only red, green, blue: -![Visual Editor](https://assets-docs.dify.ai/2025/04/a9d6a34a7903f81e4d57c7f1d8d0712b.png) + ```json + { + "type": "string", + "enum": ["red", "green", "blue"] + } + ``` -**Add Fields** + **Manage Fields** -Click **Add Field** and set parameters below: + - To Edit: Hover over a field and click the Edit icon. + - To Delete: Hover over a field and click the Delete icon. -- *(required)* Field Name + > Note: Deleting an object or array removes all its child fields. -- *(required)* Field Type: Choose from string, number, object, array, etc. + **Import from JSON** - > Note: Object and array type fields can contain child fields. + 1. Click **Import from JSON** and paste your example: -- Description: Helps the LLM understand what the field means. + ```json + { + "comment": "This is great!", + "rating": 5 + } + ``` -- Required: Ensures the LLM always includes this field in its output. + 2. 
Click **Submit** to convert it into a schema. -- Enum: Restricts possible values. For example, to allow only red, green, blue: + **Generate with AI** -```json -{ - "type": "string", - "enum": ["red", "green", "blue"] -} -``` - -**Manage Fields** - -- To Edit: Hover over a field and click the Edit icon. - -- To Delete: Hover over a field and click the Delete icon. - - > Note: Deleting an object or array removes all its child fields. - -**Import from JSON** - -1. Click **Import from JSON** and paste your example: - -```json -{ - "comment": "This is great!", - "rating": 5 -} -``` - -2. Click **Submit** to convert it into a schema. - -**Generate with AI** - -1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: + 1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: > "I need a JSON Schema for user profiles with username (string), age (number), and interests (array)." -2. Click **Generate** to create a schema: + 2. Click **Generate** to create a schema: -```json -{ - "type": "object", - "properties": { - "username": { - "type": "string" - }, - "age": { - "type": "number" - }, - "interests": { - "type": "array", - "items": { + ```json + { + "type": "object", + "properties": { + "username": { "type": "string" + }, + "age": { + "type": "number" + }, + "interests": { + "type": "array", + "items": { + "type": "string" + } } - } - }, - "required": ["username", "age", "interests"] -} -``` + }, + "required": ["username", "age", "interests"] + } + ``` -***JSON Schema*** + **_JSON Schema_** -**When to Use** + **When to Use** -- For complex fields that need nesting, (e.g., `order_details`, `product_lists`) + - For complex fields that need nesting, (e.g., `order_details`, `product_lists`) + - When you want to import and modify existing JSON Schemas or API examples + - When you need advanced schema features, such as `pattern` (regex matching) or `oneOf` (multiple type support) + - When you want to fine-tune an 
AI-generated schema to fit your exact requirements -- When you want to import and modify existing JSON Schemas or API examples + ![JSON Schema](https://assets-docs.dify.ai/2025/04/669af808dd9d0d8521a36e14db731cec.png) -- When you need advanced schema features, such as `pattern` (regex matching) or `oneOf` (multiple type support) + **Add Fields** -- When you want to fine-tune an AI-generated schema to fit your exact requirements + 1. Click **Import from JSON** and add your field structure: -![JSON Schema](https://assets-docs.dify.ai/2025/04/669af808dd9d0d8521a36e14db731cec.png) + ```json + { + "name": "username", + "type": "string", + "description": "user's name", + "required": true + } + ``` -**Add Fields** + 2. Click **Save**. Your schema will be validated automatically. -1. Click **Import from JSON** and add your field structure: + **Manage Fields**: Edit field types, descriptions, default values, etc. in the JSON code box, and then click **Save**. -```json -{ - "name": "username", - "type": "string", - "description": "user's name", - "required": true -} -``` + **Import from JSON** -2. Click **Save**. Your schema will be validated automatically. + 1. Click **Import from JSON** and paste your example: -**Manage Fields**: Edit field types, descriptions, default values, etc. in the JSON code box, and then click **Save**. + ```json + { + "comment": "This is great!", + "rating": 5 + } + ``` -**Import from JSON** + 2. Click **Submit** to convert it into a schema. -1. Click **Import from JSON** and paste your example: + **Generate with AI** -```json -{ - "comment": "This is great!", - "rating": 5 -} -``` - -2. Click **Submit** to convert it into a schema. - -**Generate with AI** - -1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: + 1. Click the AI Generate icon, select a model (like GPT-4o), and describe what you need: > "I need a JSON Schema for user profiles with username (string), age (number), and interests (array)." -2. 
Click **Generate** to create a schema: + 2. Click **Generate** to create a schema: -```json -{ - "type": "object", - "properties": { - "username": { - "type": "string" - }, - "age": { - "type": "number" - }, - "interests": { - "type": "array", - "items": { + ```json + { + "type": "object", + "properties": { + "username": { "type": "string" + }, + "age": { + "type": "number" + }, + "interests": { + "type": "array", + "items": { + "type": "string" + } } - } - }, - "required": ["username", "age", "interests"] -} -``` - + }, + "required": ["username", "age", "interests"] + } + ``` -*** +--- #### Use Cases -* **Reading Knowledge Base Content** +- **Reading Knowledge Base Content** To enable workflow applications to read "[Knowledge Base](../../knowledge-base/)" content, such as building an intelligent customer service application, please follow these steps: @@ -330,22 +319,22 @@ To enable workflow applications to read "[Knowledge Base](../../knowledge-base/) The `result` variable output by the Knowledge Retrieval Node also includes segmented reference information. You can view the source of information through the **Citation and Attribution** feature. -Regular variables from upstream nodes can also be filled into context variables, such as string-type variables from the start node, but the **Citation and Attribution** feature will be ineffective. + Regular variables from upstream nodes can also be filled into context variables, such as string-type variables from the start node, but the **Citation and Attribution** feature will be ineffective. 
-* **Reading Document Files** +- **Reading Document Files** To enable workflow applications to read document contents, such as building a ChatPDF application, you can follow these steps: -* Add a file variable in the "Start" node; -* Add a document extractor node upstream of the LLM node, using the file variable as an input variable; -* Fill in the **output variable** `text` of the document extractor node into the prompt of the LLM node. +- Add a file variable in the "Start" node; +- Add a document extractor node upstream of the LLM node, using the file variable as an input variable; +- Fill in the **output variable** `text` of the document extractor node into the prompt of the LLM node. For more information, please refer to [File Upload](../file-upload). ![](https://assets-docs.dify.ai/2025/04/c74cf0c58aaf1f35e515044deec2a88c.png) -* **Error Handling** +- **Error Handling** When processing information, LLM nodes may encounter errors such as input text exceeding token limits or missing key parameters. Developers can follow these steps to configure exception branches, enabling contingency plans when node errors occur to avoid interrupting the entire flow: @@ -356,21 +345,13 @@ When processing information, LLM nodes may encounter errors such as input text e For more information about exception handling methods, please refer to the [Error Handling](https://docs.dify.ai/guides/workflow/error-handling). -* **Structured Outputs** +- **Structured Outputs** **Case: Customer Information Intake Form** Watch the following video to learn how to use JSON Schema Editor to collect customer information: - + +
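Because response formatting may vary on models without native structured-output support, it can help to verify the shape of the returned JSON in a downstream code node before using it. A minimal stdlib-only sketch for the user-profile schema shown earlier (a production workflow might use a full JSON Schema validator instead):

```python
import json

# Required fields and accepted Python types, mirroring the example schema.
REQUIRED_FIELDS = {"username": str, "age": (int, float), "interests": list}

def check_profile(raw: str) -> dict:
    """Parse the model's output and verify the schema's required fields."""
    data = json.loads(raw)
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(data[field], expected):
            raise TypeError(f"field {field!r} has an unexpected type")
    return data
```

If validation fails, the error can be routed through the node's exception branch described under Error Handling above.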