Add workflow 101 tutorial (#705)

* add the zh and en workflow 101 tutorials

* refine the formats for readability

* refine formatting and minor issues

* Update Workflow 101 lessons and images

* Update image display in Workflow 101 lesson 01

* Rename Dify workflow image

* Update Workflow 101 Lesson 3 with new images and prompt example

* Update Workflow 101 Lesson 03 images

* Update Dify workflow lesson 4 prompt image

* Update Workflow 101 Lesson 5 with new images and content

* Update Workflow 101 Lesson 6 content and images

* Update Workflow 101 Lesson 08 and add new images

* Refine email reply instruction in Workflow 101 lesson 8

* Update Workflow 101 Lesson 9 with new steps and images

* Update Workflow 101 Lesson 09 content and images

* Fix whitespace in Workflow 101 Lesson 9

* Add image alt text to workflow example in zh lesson 01

* Update LLM node images and text formatting in Workflow 101 Lesson 03

* Update zh/use-dify/tutorials/workflow-101/lesson-05.mdx content

* Update zh/workflow-101/lesson-03.mdx for clarity

* Remove unnecessary italics in Workflow 101 lesson 4

* Update Workflow 101 Lesson 5 for clarity and image display

* Update images in Workflow 101 Lesson 06

* Update Lesson 7 workflow tutorial with new images and text fixes

* Update zh/use-dify/tutorials/workflow-101/lesson-06.mdx content

* Update prompt instructions in Workflow 101 Lesson 06

* Clarify condition in Workflow 101 Lesson 07

* Fix formatting in Workflow 101 lesson 8

* Update Workflow 101 Lesson 9 for email formatting optimization

* Fix typo in Workflow 101 Lesson 9

* Remove Jinja2 example from workflow lesson

* Update image syntax in Workflow 101 Lesson 02

* Update image tag in Workflow 101 Lesson 3

* Remove italics from workflow 101 lessons

* Fix typo in zh/use-dify/tutorials/workflow-101/lesson-05.mdx

* Refine parameter types explanation in Workflow 101 lesson 6

* Refine Chinese text in Workflow 101 lesson 06

* Update Lesson 7: Enhance Workflows content

* Fix formatting in Workflow 101 Lesson 7

* Refine Chinese text in Workflow 101 lesson 8

* Refine instructions for Workflow 101 Lesson 08 test run

* Clarify the purpose of template conversion in Workflow 101 lesson 9

* Update Workflow 101 Lesson 10 image syntax

* Polish workflow 101 tutorials (en/zh) and add ja translation

* Update Workflow 101 lesson 1 content

* Fix formatting and update workflow creation instructions in Lesson 02

* Clarify multi-modal model description in Workflow 101 lesson 3

* Remove italics from workflow tutorial lessons

* Update Workflow 101 Lesson 6 prompt example

* Update formatting in Workflow 101 lesson 8

* Update Workflow 101 lesson 1 content

* Update zh/use-dify/tutorials/workflow-101/lesson-02.mdx content

* Refine Chinese text in Workflow 101 Lesson 3

* Refine RAG explanation in Workflow 101 Lesson 04

* Update zh/use-dify/tutorials/workflow-101/lesson-05.mdx content

* Update formatting in Workflow 101 Lesson 08

* final checks

* format and terminology fixes

---------

Co-authored-by: Anne <annezj92@gmail.com>
This commit is contained in:
Riskey
2026-03-11 15:42:44 +08:00
committed by GitHub
parent 41be33b22f
commit df19776bb0
137 changed files with 5117 additions and 0 deletions

.gitignore vendored

@@ -4,3 +4,5 @@
__pycache__/
CLAUDE.md
AGENTS.md
.claude/CLAUDE.local.md
.claude/settings.local.json



@@ -229,6 +229,21 @@
"group": "Tutorials",
"expanded": false,
"pages": [
{
"group": "Workflow 101",
"pages": [
"en/use-dify/tutorials/workflow-101/lesson-01",
"en/use-dify/tutorials/workflow-101/lesson-02",
"en/use-dify/tutorials/workflow-101/lesson-03",
"en/use-dify/tutorials/workflow-101/lesson-04",
"en/use-dify/tutorials/workflow-101/lesson-05",
"en/use-dify/tutorials/workflow-101/lesson-06",
"en/use-dify/tutorials/workflow-101/lesson-07",
"en/use-dify/tutorials/workflow-101/lesson-08",
"en/use-dify/tutorials/workflow-101/lesson-09",
"en/use-dify/tutorials/workflow-101/lesson-10"
]
},
"en/use-dify/tutorials/simple-chatbot",
"en/use-dify/tutorials/twitter-chatflow",
"en/use-dify/tutorials/customer-service-bot",
@@ -616,6 +631,21 @@
"group": "教程",
"expanded": false,
"pages": [
{
"group": "工作流 101",
"pages": [
"zh/use-dify/tutorials/workflow-101/lesson-01",
"zh/use-dify/tutorials/workflow-101/lesson-02",
"zh/use-dify/tutorials/workflow-101/lesson-03",
"zh/use-dify/tutorials/workflow-101/lesson-04",
"zh/use-dify/tutorials/workflow-101/lesson-05",
"zh/use-dify/tutorials/workflow-101/lesson-06",
"zh/use-dify/tutorials/workflow-101/lesson-07",
"zh/use-dify/tutorials/workflow-101/lesson-08",
"zh/use-dify/tutorials/workflow-101/lesson-09",
"zh/use-dify/tutorials/workflow-101/lesson-10"
]
},
"zh/use-dify/tutorials/simple-chatbot",
"zh/use-dify/tutorials/twitter-chatflow",
"zh/use-dify/tutorials/customer-service-bot",
@@ -1003,6 +1033,21 @@
"group": "チュートリアル",
"expanded": false,
"pages": [
{
"group": "ワークフロー 101",
"pages": [
"ja/use-dify/tutorials/workflow-101/lesson-01",
"ja/use-dify/tutorials/workflow-101/lesson-02",
"ja/use-dify/tutorials/workflow-101/lesson-03",
"ja/use-dify/tutorials/workflow-101/lesson-04",
"ja/use-dify/tutorials/workflow-101/lesson-05",
"ja/use-dify/tutorials/workflow-101/lesson-06",
"ja/use-dify/tutorials/workflow-101/lesson-07",
"ja/use-dify/tutorials/workflow-101/lesson-08",
"ja/use-dify/tutorials/workflow-101/lesson-09",
"ja/use-dify/tutorials/workflow-101/lesson-10"
]
},
"ja/use-dify/tutorials/twitter-chatflow",
"ja/use-dify/tutorials/customer-service-bot",
"ja/use-dify/tutorials/build-ai-image-generation-app",


@@ -0,0 +1,49 @@
---
title: "Lesson 1: What is a Workflow?"
---
## 👋 Welcome to Dify 101
We are going to take you from Zero to Hero. By the end of this course, you will build your very own Advanced AI Email Assistant.
Let's leave coding behind for a second and talk about cooking.
Imagine you want to cook a dish that you haven't made before. To make that happen, you need a **Recipe**. A recipe is just like a workflow! It tells you exactly what to do, in what order, to get the dish you want.
## Meet Workflow
In Dify, you are the head chef who writes a Recipe for the AI to follow. Here are the things you need to prepare beforehand:
1. Input (Ingredients): The information you give the AI. This could be a user's question, a PDF document, or a messy email draft.
2. Process (Instructions): The steps you tell the AI to take. For example: First, summarize this text. Next, translate it into Spanish. Finally, format it as a LinkedIn post.
3. Output (The Dish): The final result the AI hands back to you.
To sum up, a workflow is a flowchart that asks AI to complete tasks in a specific order.
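This input → process → output order can be sketched in a few lines of Python (the node functions below are hypothetical stand-ins, not Dify's API):

```python
# A workflow chains nodes (here, plain functions) in a fixed order.
# These functions are illustrative stand-ins for Dify nodes.

def summarize(text: str) -> str:
    # Stand-in for an LLM node that condenses the input.
    return text[:40] + "..."

def translate(text: str, lang: str) -> str:
    # Stand-in for an LLM node that translates the summary.
    return f"[{lang}] {text}"

def run_workflow(user_input: str) -> str:
    # Input (ingredients) -> Process (instructions) -> Output (the dish)
    summary = summarize(user_input)
    return translate(summary, "es")

print(run_workflow("Workflows run tasks in a specific, repeatable order every time."))
```

Each function only starts once the previous one hands over its result, which is exactly what the connected nodes on the canvas do.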
This is a Smart ID Scanner workflow. Its job is to extract the information on the front and back of an ID card, then send the text back to you.
<Frame>
![Workflow Example](/images/difyworkflow101-lesson01-workflowexample.png)
</Frame>
### Node
Let's take a closer look at the workflow above. The whole process is made up of a few connected steps: **Uploading the image, Extracting the information, and Combining the results**.
Each of these steps is called a **Node**.
Think of them like runners in a relay race: each node has a specific task. Once it finishes its turn, it passes the baton to the next node in line.
Dify offers a box of ready-to-use nodes, such as LLM, Knowledge Retrieval, If/Else, and Tools.
You can connect these nodes just by dragging and dropping—it's like building with Lego blocks! Snap them together to create a powerful automated workflow.
## It's Your Turn
1. Go to [Dify](https://dify.ai/) and click **Get Started** in the upper right corner.
2. Click on **Explore**. This is a library of workflow templates for different scenarios.
<Frame>
![Explore](/images/difyworkflow101-lesson01-explore.png)
</Frame>
3. Pick a template that looks like the right fit for you. Don't worry if you don't understand every setting yet—just look at how the nodes are connected.


@@ -0,0 +1,168 @@
---
title: "Lesson 2: Head and Tail (Start & Output Node)"
sidebarTitle: "Lesson 2: Head and Tail"
---
In the last lesson, we compared a Workflow to a Recipe. Today, we are stepping into the professional kitchen to prep our ingredients (Start) and get our serving plates ready (Output).
## Create the App
<Steps>
<Step title="Create from Blank">
Click on **Studio** at the top of the screen. Under Create App on the left, click **Create from Blank**.
<Frame>
![Create the App](/images/difyworkflow101-lesson02-creatingtheapp.png)
</Frame>
</Step>
<Step title="Configure App Type">
Select **Workflow** as the app type, fill in **App Name & Icon**, then click **Create**.
<Frame>
![App Name & Icon](/images/difyworkflow101-lesson02-createworkflow.png)
</Frame>
</Step>
<Step title="Choose Start Node Type">
Click **User Input**, and a new popup window appears. Two options here decide how your app starts running:
- **User Input**
This is **Manual Mode**. The workflow only starts working when you (the user) type something into the chat box.
Best for: Most AI apps. For example, chatbots, writing assistants, translation, etc.
- **Trigger**
This is **Automatic Mode**. It runs automatically based on a signal (like 8:00 AM every morning, or a specific event).
Best for: Repetitive tasks that run at a specific time, or workflows that should run after a task is completed elsewhere. For example, a daily news summary.
<Frame>
![Trigger](/images/difyworkflow101-lesson02-trigger.png)
</Frame>
</Step>
</Steps>
## Meet the Orchestration Canvas
After selecting the Start node, you will see a large blank area. This is your orchestration canvas where you will design, build, and test your workflow.
<Frame>
![Orchestration Studio](/images/difyworkflow101-lesson02-orchestrationstudio.png)
</Frame>
Remember the Nodes we learned in Lesson 1? The user input node you see on the canvas now is where everything begins.
Every complete workflow relies on a basic skeleton: the Start Node (The Head) and the Output Node (The Tail).
## The Start Node
<Frame>
![Start Node](/images/difyworkflow101-lesson02-startnode.png)
</Frame>
The Start Node is the only entrance to your entire workflow. It's like the Prep Ingredients step in cooking. Its job is to define what information the workflow needs to receive from the user to get started.
We just selected **User Input** as our Start Node.
### Core Concept: Variables
Inside the Start Node, you will see the word **Variable**. Don't panic! You can think of a variable as a **Storage Box with a Label**.
Each box is designed to hold a specific type of information.
For example, if you are building a Travel Planner, you need the user to provide two pieces of information: `Destination` and `Travel Days`.
User A might want to go to Japan for 5 days. User B might want to go to Paris for 3 days.
Every user provides different content, so every time the app runs, the stuff inside these boxes changes.
That is what a variable is: a labeled blank for the user to fill in, letting your workflow handle different requests flexibly every time.
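In code terms, variables are named slots that each run fills with different values. A rough sketch (not Dify's internals):

```python
# Variables are labeled boxes; each run fills them with different content.
def plan_trip(variables: dict) -> str:
    destination = variables["destination"]   # box labeled "Destination"
    travel_days = variables["travel_days"]   # box labeled "Travel Days"
    return f"A {travel_days}-day itinerary for {destination}"

# User A and User B fill the same boxes with different values:
print(plan_trip({"destination": "Japan", "travel_days": 5}))
print(plan_trip({"destination": "Paris", "travel_days": 3}))
```

The workflow logic stays the same; only the contents of the boxes change between runs.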
## The End Node (Output)
<Frame>
![Output](/images/difyworkflow101-lesson02-output.png)
</Frame>
This is the finish line of the workflow. Think of it as serving the dish: it defines what the user actually sees at the very end.
For example, remember that Travel Planner we talked about? If the user enters Destination: Paris and Travel Days: 5 in the User Input node, the Output node is where the system finally hands over the result: Here is your complete 5-day itinerary for Paris.
To sum up, the Start Node and End Node define the basic input and output, shaping the skeleton of your app.
## Hands-On Practice: Start Building an AI Email Assistant
Let's build the basic framework for an AI Assistant that helps you write emails.
<Steps>
<Step title="Create the App">
You can either:
- Continue on the canvas you just opened, or
- Go back to **Studio** → **Create from Blank** → select **Workflow**, and name it Email Assistant (remember to select **User Input** in the popup!)
</Step>
<Step title="Configure the Start Node (Prep Ingredients)">
If you need the AI to help you with an email reply, what information do you need to give it?
That's right: usually the Customer's Name and the Original Email Content.
1. Click on the **Start** node. In the panel on the right, look for **Input Field** and click the **\+** button.
<Frame>
![User Input Field](/images/difyworkflow101-lesson02-inputfield.png)
</Frame>
2. In the popup, we will create two variables (two storage boxes):
**Variable 1 (For the Customer Name)**
<Frame>
![Add First Variable](/images/difyworkflow101-lesson02-variable1.png)
</Frame>
- Field Type: Text (Short Text)
- Variable Name: `customer_name`
- Label Name: Customer Name
- Keep other options as default
**Variable 2 (For the Email Content)**
<Frame>
![Add Second Variable](/images/difyworkflow101-lesson02-variable2.png)
</Frame>
- Field Type: Click the dropdown and select **Paragraph** (Since emails are usually long, a Paragraph box is bigger and holds more text)
- Variable Name: `email_content`
- Label Name: Original Email
- Max Length: Manually change this to **2000** to ensure it fits long emails
<Tip>
**Variable Name vs. Label Name**
You might notice we had to fill in two names. What's the difference?
- **Variable Name**: This is the ID card for the system. It must be unique, use English letters, and cannot have spaces.
- **Label Name**: This is the Label for the users. You can name it with any language (English, Chinese, etc.). It will be shown on the screen.
</Tip>
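The system-ID rule can be expressed as a small check. The exact pattern Dify enforces may differ, so treat this regex as an assumption:

```python
import re

# Assumed rule: a letter first, then letters/digits/underscores, no spaces.
VARIABLE_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def is_valid_variable_name(name: str) -> bool:
    return VARIABLE_NAME_RE.match(name) is not None

print(is_valid_variable_name("customer_name"))   # a valid system ID
print(is_valid_variable_name("Customer Name"))   # has a space: label only, not an ID
```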
</Step>
<Step title="Create the End Node (Set the Goal)">
Right-click anywhere on the blank white space of the canvas. Select **Add Node**, then choose **Output** from the list.
<Frame>
![Create the End Node](/images/difyworkflow101-lesson02-createendnode.png)
</Frame>
</Step>
</Steps>
Here's everything on your canvas: a **Start Node** ready to receive a name and an email, and an **Output Node** waiting to send the final result.
<Frame>
![Start Node and Output](/images/difyworkflow101-lesson02-basicworkflow.png)
</Frame>
We have successfully built the basic frame of the workflow. The empty space in the middle is where we will place the LLM (AI Brain) node in the next lesson to process this information.
## Mini Challenge
**Task**: If you needed to create a Travel Plan Generator, what variables should the Start Node include?
<Tip>
Try exploring the Field Types in **Add Variable**.
</Tip>


@@ -0,0 +1,260 @@
---
title: "Lesson 3: The Brain of the Workflow (LLM Node)"
sidebarTitle: "Lesson 3: The Brain of the Workflow"
---
<Frame>
![LLM Node](/images/difyworkflow101-lesson03-llmnode.png)
</Frame>
In Lesson 2, we set up the Ingredients (Start Node) and the Serving Plate (Output Node).
If the Start Node is the prep cook, the LLM Node is the Master Chef. It is the brain and core of your workflow.
It handles all the thinking, analyzing, and creative writing. Whether you want to summarize an article, write code, or draft an email, this is the node that does the heavy lifting.
## Configure the Model
Before getting started, we need to connect to a model provider.
<Steps>
<Step title="Open Settings">
Click on your avatar in the top right corner and select **Settings**.
<Frame>
![Settings](/images/difyworkflow101-lesson03-llmnode-1.png)
</Frame>
</Step>
<Step title="Install OpenAI Provider">
On the left menu, click **Model Provider**. Find OpenAI, and click **Install**.
<Frame>
![Choose OpenAI](/images/difyworkflow101-lesson03-llmnode-2.png)
</Frame>
<Frame>
![Install OpenAI](/images/difyworkflow101-lesson03-install.png)
</Frame>
</Step>
<Step title="Return to the Canvas">
Once installed, you are ready to go! Click **ESC** (or the **X**) in the upper right corner to return to your canvas.
</Step>
</Steps>
## Understand the Tags
A pastry chef is great at cakes but terrible at sushi. Similarly, different AI models have different strengths.
When selecting a model in Dify, you will see tags next to their names. Here's how to read them so you can pick the right one for you.
<AccordionGroup>
<Accordion title="CHAT (The Conversationalist)">
This is the bread and butter of AI. It's best for:
- Dialogue
- Writing articles
- Summarizing text
- Answering questions
</Accordion>
<Accordion title="128K (The Great Memory)">
This number represents the **Context Window**. You can think of it as short-term memory.
Here, K stands for thousand. **128K** means the model can hold about 128,000 tokens (a token is roughly a word or a syllable). The bigger the number, the better its memory.
<Info>
If you need to analyze a massive PDF report or a whole book, you need a model with a big number here.
</Info>
</Accordion>
<Accordion title="Multi-modal (The Evolved Senses)">
Modal just means **Type of Information**. Most early AI models could only read text. Multi-modal models are evolved—they have senses like eyes and ears.
**VISION (The Eyes)**
Models with this tag can do more than read; they can see! You can upload a photo of a sunset and ask, What colors are in this? or upload a picture of your fridge ingredients and ask, What can I cook with this?
**AUDIO (The Ears)**
Models with this tag can hear. You can upload an audio recording of a meeting or a lecture, and the model can transcribe it into text or write a summary for you.
**VIDEO (The Movie Analyst)**
These models can watch and understand video content. They can analyze what is happening in a video clip, just like a human watching a movie.
**DOCUMENT (The Reader)**
These models are expert readers. Instead of copying and pasting text, you can just upload a file (like a PDF or Word document). The model will read the file directly and answer questions based on what is written inside.
</Accordion>
</AccordionGroup>
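To get a feel for what 128K means in practice, here is a back-of-the-envelope token estimate. The 4-characters-per-token ratio is a rough assumption for English text, not an exact tokenizer:

```python
# Rough rule of thumb: 1 token ≈ 4 characters of English text (assumption).
def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)

book = "word " * 150_000             # ~750,000 characters, a long book
tokens = rough_token_count(book)     # ~187,500 estimated tokens
print(tokens, tokens > 128_000)      # this book would overflow a 128K window
```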
For our Email Assistant, the LLM with the **CHAT** tag is exactly what we need.
## Hands-On 1: Add the LLM Node
Let's put the brain into our workflow.
<Steps>
<Step title="Open your App">
Go back to the **AI Email Assistant** workflow we created in Lesson 2.
</Step>
<Step title="Add the Node">
Right-click in the empty space between the Start and Output nodes and click the new **LLM** block. In the right-side panel, look for **Model** and select **gpt-4o-mini**.
<Frame>
![Add the Node](/images/difyworkflow101-lesson03-addthenode.png)
</Frame>
</Step>
<Step title="Connect the Nodes">
Drag a line from the Start node to the LLM node. Drag a line from the LLM node to the Output node. Your flow should look like this: **Start → LLM → Output**.
<Frame>
![Connect the Nodes](/images/difyworkflow101-lesson03-connectthenodes.png)
</Frame>
</Step>
</Steps>
Now we need to tell the LLM exactly what to do by sending it instructions, known as a **Prompt**.
<Frame>
![Add Prompt](/images/difyworkflow101-lesson03-prompt.png)
</Frame>
### Key Concept: The Prompt (The Instructions)
**What is a Prompt?** Think of the Prompt as the specific note you attach to the order ticket. It tells the AI exactly **what to do** and **how to do it**.
The most critical part is the ability to use **Variables** from the Start Node directly within your Prompt. This allows the AI to adapt its output based on the different raw materials you provide each time.
In Dify, when you insert a variable like `customer_name` into the prompt, you are telling the AI: Go and look in the box labeled Customer Name and use the text inside.
## Hands-On 2: Write the Prompt
Now, let's apply this. We are going to write a prompt that mixes instructions with our variables.
<Steps>
<Step title="Draft the Instructions">
Click the LLM Node to open the panel and find the **system** box. **System instructions** set the rules for how the model should respond—its role, tone, and behavioral guidelines.
Let's start by writing out the instructions. You can copy and paste the text below.
```plaintext wrap
You are a professional customer service manager. Based on the customer's email, please draft a professional reply.
Requirements:
1. Start by addressing the customer name with a friendly tone.
2. Thank them for their email.
3. Let them know we have received it.
4. Sign off as Anne.
```
</Step>
<Step title="Add User Messages">
User messages are what you send to the model—a question, request, or task for the model to work on.
In this workflow, the customer's name and the email content change every single time. Instead of typing them out manually, we add Variables in user messages.
1. Click **Add Message** button below system box.
2. In the User Message box, type **customer name:**.
3. Press `/` on your keyboard.
4. The Variable Selection menu pops out, and click `customer_name`.
5. Press Enter to start a new line, and type **email content:**. Then press the `/` key again and click `email_content`.
<Frame>
![Add User Message](/images/difyworkflow101-lesson03-addusermessage.png)
</Frame>
<Tip>
You don't need to type out those curly brackets manually! Just hit `/`, then pick your variable from the menu.
</Tip>
6. Finally, your Prompt will look like this:
<Frame>
![Final Prompt](/images/difyworkflow101-lesson03-finalprompt.png)
</Frame>
</Step>
</Steps>
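Conceptually, what happens at run time is plain template substitution: Dify swaps each variable slot for its box's current contents before sending the messages to the model. A sketch (the message format here is an assumption, not Dify's actual code):

```python
SYSTEM_PROMPT = (
    "You are a professional customer service manager. Based on the "
    "customer's email, please draft a professional reply.\n"
    "Requirements:\n"
    "1. Start by addressing the customer name with a friendly tone.\n"
    "2. Thank them for their email.\n"
    "3. Let them know we have received it.\n"
    "4. Sign off as Anne."
)

USER_TEMPLATE = "customer name: {customer_name}\nemail content: {email_content}"

def build_messages(customer_name: str, email_content: str) -> list:
    # The Start node's variables are substituted into the user message.
    user = USER_TEMPLATE.format(
        customer_name=customer_name, email_content=email_content
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user},
    ]

messages = build_messages("Amanda", "Hi, tell me more about Dify.")
print(messages[1]["content"])
```

Because the instructions live in the system message and the changing data lives in the user message, the same prompt works for every customer.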
<Check>
**Hooray!** You've finished your first AI workflow in Dify!
</Check>
## Run and Test
The ingredients are prepared, the chef is on standby, and the instructions are ready. But does the dish taste good? Before we serve it to the customer, let's do a taste test.
Testing is the secret sauce of a stable workflow. It helps us catch sneaky little issues before the workflow goes live.
### Quick Concept: The Checklist
Think of the **Checklist** as your workflow's personal Health Check Doctor.
It monitors your work in real-time, automatically spotting incomplete settings or mistakes (like a node that isn't connected to anything).
Glancing at the Checklist before you hit the **Publish** button is the best way to catch avoidable errors early.
### Hands-On 3: Test & Debug
<Steps>
<Step title="The Pre-flight Check">
Look at the top right corner of your canvas. Do you see the **Checklist** icon with a little number **1** on it? This is Dify telling you: Wait a second! There's one small thing missing here.
<Frame>
![Checklist](/images/difyworkflow101-lesson03-checklist.png)
</Frame>
</Step>
<Step title="Analyze the Warning">
Click on it, and you will see a warning: **output variable is required**. It means the Output node is receiving nothing.
Imagine your Head Chef (the LLM) has finished cooking the food, but the Waiter (the Output Node) has empty hands.
</Step>
<Step title="Fix the Issue">
1. Click on the **Output Node**
2. Look for **Output Variable** and click the **Plus (+)** icon next to it
3. Type `email_reply` in the **Variable Name** field
4. Select the value: Click the variable selector and choose `{x} text` from the LLM Node
<Frame>
![Fix the Issue](/images/difyworkflow101-lesson03-fixtheissue.png)
</Frame>
</Step>
<Step title="Make a Test Run">
Now the Checklist badge is gone. Let's do a test run.
Click **Test Run** at the top right corner of the canvas. Enter the customer's name and the email, then click **Start Run**.
<Frame>
![Test Run](/images/difyworkflow101-lesson03-testrun.png)
</Frame>
<CodeGroup>
```text Sample Email for Testing
Customer Name: Amanda
Original Email:
Hi there,
I'm writing to ask for more information about Dify. Could you please tell me more on it?
Best regards,
Amanda
```
</CodeGroup>
</Step>
<Step title="Success!">
This time, you will see green checkmarks ✅ on each node, along with the AI's generated reply.
</Step>
</Steps>
<Check>
**Great job!**
You didn't just build a workflow; you also learned how to use the Checklist to catch issues before your app goes live.
</Check>
## Mini Challenge
Use the same structure to build a travel planner.
<Tip>
Explore the **Prompt Generator** to help you craft better prompts!
<Frame>
![Prompt Generator](/images/difyworkflow101-lesson03-promptgenerator.png)
</Frame>
</Tip>


@@ -0,0 +1,207 @@
---
title: "Lesson 4: The Cheat Sheet (Knowledge Retrieval)"
sidebarTitle: "Lesson 4: The Cheat Sheet"
---
In the previous lessons, our AI Email Assistant learned to draft basic emails. But if a customer asks about specific pricing plans or the refund policy, the AI might start Hallucinating—a fancy way of saying it confidently makes things up.
How do we stop the AI from hallucinating? We give it a Cheat Sheet.
## What is Retrieval-Augmented Generation (RAG)?
The technical name for this is RAG (Retrieval-Augmented Generation). Think of it as turning the AI from a chef who memorizes general recipes into a chef who has a Specific Cookbook right on the counter.
It happens in three simple steps:
**1. Retrieval (Find the Recipe)**
When a user asks a question, the AI flips through your Cookbook (the files you uploaded) to find the most relevant pages.
Example: Someone asks for Grandma's Special Apple Pie. You go find that specific recipe page.
**2. Augmentation (Prepare the Ingredients)**
The AI takes that specific recipe and puts it right in front of its eyes so it doesn't have to rely on memory.
Example: You lay the recipe on the counter and get the exact apples and cinnamon ready.
**3. Generation (The Baking)**
The AI writes the answer based only on the facts it just found.
Example: You bake the pie exactly as the recipe says, ensuring it tastes like Grandma's, not a generic store-bought version.
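The three steps can be compressed into a toy sketch. The word-overlap scoring below is a stand-in for the real vector search Dify performs, and the knowledge chunks are illustrative:

```python
# Toy "cookbook" of knowledge chunks (content is illustrative).
KNOWLEDGE = [
    "Dify lets you build LLM apps on a visual workflow canvas.",
    "Grandma's apple pie uses tart apples and a pinch of cinnamon.",
    "The refund policy allows returns within 30 days of purchase.",
]

def retrieve(query: str, top_k: int = 1) -> list:
    # 1. Retrieval: score each chunk by how many words it shares with the query.
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE,
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def augment(query: str, chunks: list) -> str:
    # 2. Augmentation: place the retrieved facts in front of the model.
    context = "\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# 3. Generation: this prompt would now be sent to the LLM.
print(augment("What is the refund policy?", retrieve("What is the refund policy?")))
```

Real systems replace the word-overlap score with embeddings, but the retrieve → augment → generate shape is the same.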
## The Knowledge Retrieval Node
Think of this as placing a stack of reference materials right next to your AI Assistant. When a user asks a question, the AI first flips through this Cheat Sheet to find the most relevant pages. Then, it combines those findings with the user's original question to think of the best answer.
In this practice, we will use the Knowledge Retrieval node to provide our AI Assistant with official Cheat Sheets, ensuring its answers are always backed by facts!
### Hands-On 1: Create the Knowledge Base
<Steps>
<Step title="Enter the Library">
Click **Knowledge** in the top navigation bar and click **Create Knowledge**.
<Frame>
![Create Knowledge](/images/difyworkflow101-lesson04-createknowledge.png)
</Frame>
In Dify, you can sync from Notion or a website, but for today, let's upload a file from your device. Click [here](https://drive.google.com/file/d/1imExB0-rtwASbmKjg3zdu-FAqSSI7-7K/view) to download the Dify Intro file for the upload later.
</Step>
<Step title="Upload the File">
Click **Import from file**. Then, select the file we just downloaded for upload.
<Frame>
![Import From File](/images/difyworkflow101-lesson04-importfromfile.png)
</Frame>
</Step>
<Step title="The 'Chopping' Step (Text Segmentation)">
High-relevance chunks are crucial for AI applications to give precise and comprehensive responses. Imagine a 500-page book: finding one sentence in it is hard. Dify chops the book into smaller Knowledge Cards so it can find the right answer faster.
**Chunk Structure**
Here, Dify automatically splits your long text into smaller, easier-to-retrieve chunks. We'll just stick with the General Mode here.
<Frame>
![Chunk Structure](/images/difyworkflow101-lesson04-chunkstructure.png)
</Frame>
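Fixed-size chunking with a small overlap is the simplest form of this "chopping". General Mode's actual parameters differ, so the numbers here are illustrative:

```python
# Sketch of fixed-size chunking with overlap (parameters are illustrative).
def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list:
    chunks = []
    step = chunk_size - overlap  # each chunk repeats the last `overlap` chars
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

long_doc = "Dify is an open-source platform for building LLM apps. " * 20
print(len(chunk_text(long_doc)))  # the document became several small chunks
```

The overlap keeps a sentence that straddles a boundary retrievable from at least one chunk.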
**Index Method**
- **High Quality**: Uses a model to process documents for more precise retrieval, which helps the LLM generate high-quality answers
- **Economical**: Uses 10 keywords per chunk for retrieval; no tokens are consumed, at the expense of reduced retrieval accuracy
<Frame>
![Index Method](/images/difyworkflow101-lesson04-indexmethod.png)
</Frame>
</Step>
<Step title="Retrieval Settings">
After the document has been processed, we need to do one final check on the retrieval settings. Here, you can configure how Dify looks up the information.
In Economical mode, only the inverted index approach is available.
<Frame>
![Retrieval Setting](/images/difyworkflow101-lesson04-retrievalsetting.png)
</Frame>
- **Inverted Index**
This is the default structure Dify uses. Think of it like the Index at the back of a book—it lists key terms and tells Dify exactly which pages they appear on.
This allows Dify to instantly jump to the right knowledge card based on keywords, rather than reading the whole book from the start.
- **Top K**
You'll see a slider set to 3. This tells Dify: When the user asks a question, find the top 3 most relevant Knowledge Cards from the cookbook to show the AI.
If you set it higher, the AI gets more context to read, but if it's too high, it might get overwhelmed with too much information.
For now, let's just keep the default settings—they are already perfectly suited for our needs.
<Frame>
![Document Processing](/images/difyworkflow101-lesson04-documentprocessing.png)
</Frame>
</Step>
<Step title="Save and Process">
Click **Save and Process**. Your knowledge base is ready!
</Step>
</Steps>
<Check>
**Awesome!**
You have successfully created your first Knowledge Base. Next, we'll use this Knowledge Base to upgrade our AI Email Assistant.
</Check>
### Hands-On 2: Add the Knowledge Retrieval Node
<Steps>
<Step title="Add the Node">
1. Go back to your Email Assistant Workflow.
2. Hover over the line between the Start and LLM nodes.
3. Click the **Plus (+)** icon and select the **Knowledge Retrieval** node.
<Frame>
![Add Knowledge Retrieval Node](/images/difyworkflow101-lesson04-addknowledgetrievalnode.png)
</Frame>
</Step>
<Step title="Connect Knowledge Base">
1. Click the node, and head to the right panel.
2. Click the **plus (+)** button next to **Knowledge** to add knowledge.
<Frame>
![Add Knowledge](/images/difyworkflow101-lesson04-addknowledge.png)
</Frame>
3. Choose **What's Dify**, and click **Add**.
<Frame>
![Select Knowledge](/images/difyworkflow101-lesson04-selectknowledge.png)
</Frame>
</Step>
<Step title="Configure Query Text">
Now that the knowledge base is connected, how do we make sure the AI actually searches it using the customer's email?
Stay at the panel, navigate to **Query text** above, and select `email_content`.
By doing this, we are telling AI: Take the customer's message and use it as a search keyword to flip through our cookbook and find the matching info. Without a query, the AI is just staring at a closed book.
<Frame>
![Query Text](/images/difyworkflow101-lesson04-querytext.png)
</Frame>
</Step>
</Steps>
In this way, the Email Assistant will use the customer's original email as a search keyword to find the most relevant answers in the Knowledge Base.
### Hands-On 3: Upgrade the Email Assistant
Now, the knowledge base is ready. We need to tell the LLM node to actually read the knowledge as context before generating the reply.
<Steps>
<Step title="Add Context">
1. Click the **LLM Node**. You'll see a new section called **Context**.
2. Click it and select **result** from the Knowledge Retrieval node.
<Frame>
![Add Context](/images/difyworkflow101-lesson04-addcontext.png)
</Frame>
</Step>
<Step title="Update the Prompt">
We need to tell the AI to generate reply based on the context.
In **System**, add additional requirement **Generate response based on** `/` and select **Context**.
<Frame>
![Update Prompt](/images/difyworkflow101-lesson04-updateprompt.png)
</Frame>
</Step>
</Steps>
**Whoo!** You've just completed the most challenging part. Your Email Assistant now has a knowledge base to consult when generating responses. Let's see how it works.
Feel free to use the sample texts below to do the testing.
<CodeGroup>
```text Sample Email for Testing
Customer Name: Amanda
Original Email:
Hi,
What does the name 'Dify' actually stand for, and what can it do for my business?
Best regards,
Amanda
```
</CodeGroup>
Check the result and you'll find that, instead of a generic guess, the AI consults the knowledge base and explains what Dify stands for.
<Frame>
![Test Result](/images/difyworkflow101-lesson04-testresult.png)
</Frame>
## Mini Challenge
1. What happens if a customer asks a question that isn't in the knowledge base?
2. What kind of information could you upload as a knowledge base?
3. Explore Chunk Structure, Index Method, and Retrieval Setting.

---
title: "Lesson 5: The Crossroads of Your Workflow (Sorting and Executing)"
sidebarTitle: "Lesson 5: The Crossroads of Your Workflow"
---
Right now, our Email Assistant sends every message down the same path of the workflow. That's not smart enough. An email asking about Dify's pricing should be handled differently from an email reporting a bug.
To make our assistant truly intelligent, we need to teach it how to Read the Room. We're going to set up a Crossroads that sends different types of emails down different tracks.
## The If/Else Node
<Frame>
![If/Else Node](/images/difyworkflow101-lesson05-ifelsenode.png)
</Frame>
The If/Else node works like a traffic light. It checks a condition (like "Does this email mention pricing?") and sends the flow left or right based on the result.
### Hands-On 1: Set up the Crossroads
Let's upgrade our assistant so it can tell the difference between Dify-related emails and everything else.
<Steps>
<Step title="Insert the Node">
Hover over the line between the Start and Knowledge Retrieval nodes. Click the **Plus (+)** icon and select the **If/Else** node.
</Step>
<Step title="Set the Rules">
1. Click the node to open the panel
2. Click **\+ Add Condition** in the IF section. Choose the variable: `{x} email_content`
<Frame>
![Add Condition](/images/difyworkflow101-lesson05-settings1.png)
</Frame>
3. The Logic: Keep it as **Contains**. Type **Dify** in the input box
<Frame>
![Contains](/images/difyworkflow101-lesson05-settings2.png)
</Frame>
Now, the complete logic for the IF branch is: `If the email content contains the word Dify`.
</Step>
</Steps>
<Info>
**Understanding the Traffic Light**
When setting conditions, Dify offers several ways to judge information, much like the different signals at a crossroads:
- **Is / Is Not**
Like a perfect key for a lock. The content must match your value exactly.
- **Contains / Not Contains**
Like a magnifying glass. It checks if a specific keyword exists anywhere in the text. This is what we are using today.
- **Starts with / Ends with**
Check if the text begins or ends with specific characters.
- **Is Empty / Is Not Empty**
Check if the variable has any content. For example: checking if a user actually uploaded an attachment.

Understanding these helps you set accurate and flexible rules, building a much smarter workflow!
</Info>
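If it helps to see these operators as plain logic, here is a rough sketch using Python string checks as an analogy. This is not Dify's actual condition engine, just an illustration of what each operator tests.

```python
# Analogy only: how the If/Else operators map to plain string checks.
email_content = "Hi, does Dify have a free plan?"

conditions = {
    "is": email_content == "Dify",                  # exact match of the whole value
    "contains": "Dify" in email_content,            # keyword anywhere in the text
    "starts_with": email_content.startswith("Hi"),  # prefix check
    "is_empty": len(email_content.strip()) == 0,    # no content at all
}

# Our rule today is "Contains Dify", so this email takes the IF branch.
branch = "IF" if conditions["contains"] else "ELSE"
print(branch)  # → IF
```

In the workflow you never write this code; you pick the operator and the value in the node panel, and Dify evaluates it for you.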
### Hands-On 2: Plan Different Paths
Now that we have the crossroad here, we need to decide what happens on each road.
#### A. The Dify-Related Email Track (IF Branch)
Click the **plus (+)** icon on the right side of the IF branch, drag out a line, and connect it to the **Knowledge Retrieval** node.
What this means: When the email contains the word Dify, the flow will execute the professional reply process we built in the last lesson (which looks up information in the Knowledge Base).
<Frame>
![Connect IF Branch](/images/difyworkflow101-lesson05-connectifbranch.png)
</Frame>
#### B. The Unrelated Email Track (ELSE Branch)
For emails that don't mention Dify, we want to create a simple, polite, and general reply process.
<Steps>
<Step title="Create a new Node">
Click the **(+)** next to ELSE and select a new **LLM Node (LLM 2)**
</Step>
<Step title="Add Prompt to this LLM node">
Copy and paste the prompt below
```plaintext wrap
You are a professional customer service manager. Based on the customer's email, kindly inform the user that no relevant information was found and provide relevant guidance.
Requirements:
1. Address the customer name in a friendly tone.
2. Thank them for their letter.
3. Keep the tone professional and friendly.
4. Sign off as "Anne."
```
</Step>
<Step title="Add User Message">
1. Click the **Add Message** button below the System prompt.
2. In the User Message box, type **customer name:**.
3. Press `/` on your keyboard.
4. When the variable selection menu pops up, click `customer_name`.
5. Press Enter to start a new line, and type **email content:**
6. Press the / key again and click on `email_content`.
<Frame>
![Prompt for LLM 2](/images/difyworkflow101-lesson05-finalpromptforllm2.png)
</Frame>
</Step>
</Steps>
Now we have two tracks generating two different replies. Imagine if we had 10 tracks: our workflow would look like a messy plate of spaghetti.
To keep things clean, we use a Variable Aggregator.
## Variable Aggregator
<Frame>
![Variable Aggregator](/images/difyworkflow101-lesson05-variableaggregator.png)
</Frame>
Variable Aggregator is like a traffic hub where all the different roads merge back into one main highway.
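Since only one branch actually runs per execution, the aggregator simply passes along whichever assigned variable received a value. A minimal Python analogy (not Dify's internals):

```python
# Analogy: the aggregator outputs the first assigned variable that has a value.
def aggregate(*candidates):
    for value in candidates:
        if value is not None:
            return value
    return None

llm1_text = None                        # the IF branch didn't run this time
llm2_text = "Thanks for reaching out!"  # the ELSE branch produced the reply

print(aggregate(llm1_text, llm2_text))  # → Thanks for reaching out!
```

Whichever road the email took, downstream nodes only ever see one merged output variable.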
### Hands-On 3: Add Variable Aggregator
<Steps>
<Step title="Add the Aggregator">
1. Select the connection line between the End Node and the LLM node and delete it.
2. Right-click on the canvas, select **Add Node**, and choose the **Variable Aggregator** node.
<Frame>
![Add Variable Aggregator](/images/difyworkflow101-lesson05-addvariableaggregator.png)
</Frame>
</Step>
<Step title="Merge the Paths">
Connect LLM and LLM 2 node to the Variable Aggregator.
</Step>
<Step title="Assign the Output">
1. Click the Variable Aggregator node.
2. Click the **plus (+)** icon next to **Assign Variables**.
3. Select the **text** from LLM 1 and the **text** from LLM 2.
<Frame>
![Assign Variable](/images/difyworkflow101-lesson05-assignvariable.png)
</Frame>
Now, no matter which LLM node generates the response, the Variable Aggregator node gathers the content and hands it to the Output Node.
</Step>
<Step title="The Final Step">
1. Connect the Variable Aggregator to the Output node.
2. Update the Output Variable to the Variable Aggregator's result instead of the previous LLM's result.
<Frame>
![Update Output Variable](/images/difyworkflow101-lesson05-updateoutputvariable.png)
</Frame>
Here's how the workflow looks:
<Frame>
![Final Workflow](/images/difyworkflow101-lesson05-finalworkflow.png)
</Frame>
</Step>
<Step title="Test and Run">
Click **Test Run**, enter a customer name, and try testing with inputs that both include and exclude the keyword Dify to see the different results.
</Step>
</Steps>
## Mini Challenge
For business inquiry emails, how should we edit this workflow to generate a proper response?
<Tip>
Don't forget to update the knowledge base with business-related files.
</Tip>

---
title: "Lesson 6: Handle Multiple Tasks (Parameter Extraction & Iteration)"
sidebarTitle: "Lesson 6: Handle Multiple Tasks"
---
Imagine you get an email saying:
> Hi! What exactly is Dify? Also, which models does it support? And do you have a free plan?
If we send this to our current AI assistant, it might only answer the first question or give a vague response to the rest.
We need a way to identify every question first, and then loop through our Knowledge Base to answer them one by one.
## Parameter Extractor
<Frame>
![Parameter Extractor](/images/difyworkflow101-lesson06-parameterextractor.png)
</Frame>
Think of the Parameter Extractor as a highly organized scout. It reads a paragraph of text (like an email) and picks out the specific pieces of information you asked for, putting them into a neat, organized list.
### Hands-On 1: Add Parameter Extractor
Before we upgrade the email assistant, let's remove these nodes: Knowledge Retrieval, If/Else, LLM, LLM 2, and Variable Aggregator.
<Steps>
<Step title="Add the Node">
Right after the Start node, add the **Parameter Extractor** node.
<Frame>
![Add Parameter Extractor](/images/difyworkflow101-lesson06-addparameterextractor.png)
</Frame>
</Step>
<Step title="Set the Input">
Click Parameter Extractor, and in the **Input Variable** section on the right panel, choose `email_content`.
<Frame>
![Set the Input](/images/difyworkflow101-lesson06-settheinput.png)
</Frame>
Since AI doesn't automatically know which specific information we need from the email, we must tell it to collect all the questions.
</Step>
<Step title="Add Extract Parameter">
Click the **plus (+)** icon next to **Extract Parameters** to start defining what the AI should look for. Let's call it `question_list`.
<Frame>
![Add Extract Parameter](/images/difyworkflow101-lesson06-addextractparameter.png)
</Frame>
<Info>
**Parameter Types**
If the Parameter Extractor is a scout, then Type is the bucket it uses to carry the info. You need the right bucket for the right information.
**Single Items (The Small Buckets)**
- **String (Text)**: For a single piece of text, e.g. customer's name
- **Number**: For a single digit, e.g. order quantity
- **Boolean**: A simple Yes or No (True/False), good for a judgement result or a decision
**List Items (The Arrays)**
- **Array[String]**: Array means List, and String means Text. So, `Array[String]` means we are using a basket that can hold multiple pieces of text—like all the separate questions in an email
- **Array[Number]**: A container that holds different numbers, e.g. a list of prices or commodities
- **Array[Boolean]**: Used to store multiple Yes/No judgment results. For example, checking a list containing multiple to-do items and returning whether each item is completed, such as `[Yes, No, Yes]`
- **Array[Object]**: An advanced folder that holds sets of data (like a contact list where each entry has a Name and a Phone Number)
</Info>
</Step>
<Step title="Finish Adding Extract Parameter">
1. Based on our needs, choose `Array[String]` for the question list.
2. Add a description to provide additional context. You can write: "All the questions raised by the user in the email." After that, click **Add**.
<Frame>
![Finish Adding Extract Parameter](/images/difyworkflow101-lesson06-finishaddextractparameter.png)
</Frame>
</Step>
<Step title="Add Instructions">
In the **Instructions** box below the extracted parameters, type a clear command to tell the AI how to act.
For example: Extract all the questions from the email, and make each question a single item in the list.
</Step>
</Steps>
By doing this, the node will be able to find all the questions in the email. Now that our scout has successfully gathered the Golden Nuggets, we need to move to the next step: teaching the AI to process each question.
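To make the result concrete, here is a sketch of what the node might hand downstream for the email at the top of this lesson. The extraction itself is done by the LLM inside the node; the output is simply a typed variable, a list of strings (the exact wording of each item may vary).

```python
# Hypothetical input and output of the Parameter Extractor node.
email_content = (
    "Hi! What exactly is Dify? Also, which models does it support? "
    "And do you have a free plan?"
)

# The extracted parameter `question_list`, typed Array[String]:
# one question per item, pulled out of the free-form text above.
question_list = [
    "What exactly is Dify?",
    "Which models does it support?",
    "Do you have a free plan?",
]

assert all(isinstance(q, str) for q in question_list)  # Array[String] = list of text
print(len(question_list))  # → 3
```

That structured list is exactly what the next node needs to loop over.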
## Iteration
<Frame>
![Iteration](/images/difyworkflow101-lesson06-iteration.png)
</Frame>
With Iteration, your assistant gets a team of identical twins. When you hand over a list (like the list of questions from the email), a twin appears for every single item on that list.
Each twin takes their assigned item and performs the exact same task you've set up, ensuring nothing gets missed.
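In plain code terms, Iteration is a loop: the same sub-process runs once per item, and each result is collected into an output list. A minimal sketch (the real sub-process is the Knowledge Retrieval and LLM nodes you'll build below):

```python
# Analogy: the Iteration node as a loop over the extracted question list.
def sub_process(item: str) -> str:
    # Stand-in for Knowledge Retrieval + LLM answering one question.
    return f"Answer to: {item}"

question_list = ["What is Dify?", "Is there a free plan?"]

output = []
for item in question_list:          # `item` = the question currently being processed
    output.append(sub_process(item))

print(output)
# → ['Answer to: What is Dify?', 'Answer to: Is there a free plan?']
```

The `item` variable in this sketch is the same idea as the `{x} item` variable you'll select inside the Iteration box.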
### Hands-On 2: Set up Iteration Node
<Steps>
<Step title="Add the Node">
1. Add an Iteration node after the Parameter Extractor.
2. Click on the Iteration node and navigate to the Input panel on the right.
3. Select `{x} question_list` from the Parameter Extractor. Leave the output variable blank for now.
<Frame>
![Add Iteration Node](/images/difyworkflow101-lesson06-additeration.png)
</Frame>
**Advanced Options in Iteration**
In the Iteration panel, you'll see more settings. Let's have a quick walk-through.
<Frame>
![Advanced Options in Iteration](/images/difyworkflow101-lesson06-advancediterationoptions.png)
</Frame>
**Parallel Mode**: OFF (Default)
- When disabled, the workflow processes each item in the list one after another (finish Question 1, then move to Question 2).
- When enabled, the workflow attempts to process all items in the list simultaneously (similar to 5 chefs cooking 5 different dishes at the same time).
**Error Response Method**: Terminate on error by default.
- **Terminate**: This means if any single item in the list (e.g., the 2nd question) fails during the sub-process, the entire workflow will stop immediately
- **Ignore Error and Continue**: This means even if the 2nd question fails, the workflow will skip it and move on to process the remaining questions
- **Remove Abnormal Output**: Similar to Ignore, but it also removes the failed item from the final output list
Back to the workflow, you'll see a sub-process area under the Iteration node. Every node inside this box will run once for every question.
</Step>
<Step title="Add Knowledge Retrieval Node">
1. Inside the Iteration box, add a Knowledge Retrieval node.
2. Set the query text to `{x} item`. In Iteration, item always refers to the question that is currently being processed.
<Frame>
![Add Knowledge Retrieval Node and Set Query Text](/images/difyworkflow101-lesson06-knowledgeretrievalquerytext.png)
</Frame>
</Step>
<Step title="Add LLM Node">
1. Add an LLM node after Knowledge Retrieval.
2. Configure it to answer the question based on the retrieved context.
<Tip>
Remember Lesson 4? Use those Prompt skills and don't forget context!
</Tip>
Feel free to use the prompt below:
**System**:
```plaintext wrap
You are a professional Dify Customer Service Manager. Please provide a response to questions strictly based on the `Context`.
```
**User**:
```plaintext wrap
questions: Iteration/{x} item
```
<Frame>
![Add LLM Prompt](/images/difyworkflow101-lesson06-addllmandprompt.png)
</Frame>
Since the iteration node generates an answer for each individual question, we need to gather all these answers to create one complete reply email.
</Step>
<Step title="Set Iteration Output">
1. Click the Iteration node.
2. In the **Output Variable**, select the variable representing the LLM's answer inside the loop. Now, the Iteration node will collect every answer and gather them into a new list.
<Frame>
![Set Iteration Output](/images/difyworkflow101-lesson06-setiterationoutput.png)
</Frame>
</Step>
<Step title="Add Final LLM Node">
Finally, connect one last LLM node. This final editor will take all the collected answers and polish them into one professional email.
Don't forget to add prompts to the System and User messages. Feel free to refer to the prompts below.
**System**:
```plaintext wrap
You are a professional customer service assistant. Please organize the answers prepared for customer into a clear and complete email reply.
Sign the email as Anne.
```
**User**:
```plaintext wrap
answers: Iteration/{x}output
customer: User Input/{x}customer_name
```
<Frame>
![Add Final LLM Node](/images/difyworkflow101-lesson06-addfinalllmnode.png)
</Frame>
</Step>
<Step title="Final Check">
1. Click the checklist to see if anything is missing. According to the notes, we need to connect the Output node and fix the invalid variable issue.
2. Connect the Output node to the LLM 2 node before it, remove its previous variable, then select the **text** from LLM 2 as the output variable.
<Frame>
![Select Output Variable](/images/difyworkflow101-lesson06-outputvariable.png)
</Frame>
</Step>
</Steps>
Now, you can write a test email with 3 different questions and check the generated reply.
## Mini Challenge
What else could Parameter Extractor find?
<Tip>
Try exploring the other parameter types.
</Tip>

---
title: "Lesson 7: Enhance Workflows (Plugins)"
sidebarTitle: "Lesson 7: Enhance Workflows"
---
Our email assistant can now flip through our knowledge base. But what if a customer asks a beyond-knowledge-base question, like: What is in the latest Dify release?
If the knowledge base hasn't been updated yet, the workflow will be at a loss. To fix this, we need to equip it with a Live Search skill!
## Tools
<Frame>
![Tools](/images/difyworkflow101-lesson07-tools.png)
</Frame>
Tools are the superpower for your AI workflow.
The [Dify Marketplace](https://marketplace.dify.ai/) is like a Plugin Supermarket. It's filled with ready-made superpowers—searching Google, checking the weather, drawing images, or calculating complex math. You just install and plug them into your workflow with several clicks.
Now, let's continue upgrading the current workflow.
### Hands-On 1: Upgrade the Sub-process Area in Iteration
We are going to add a new logic to our assistant: Check the Knowledge Base first; if the answer isn't there, go search Google.
To focus on the new logic, let's keep only these nodes: **User input, Parameter Extractor, and Iteration**.
#### Step 1: Knowledge Query and the Judge
<Steps>
<Step title="Enter the Iteration">
1. Click to enter the sub-process area of the Iteration node.
2. Keep Knowledge Retrieval node, and make sure the query variable is `{x} item`.
3. Delete the previous LLM node.
</Step>
<Step title="Add the Judge (LLM Node)">
Add an LLM node right after the Knowledge Retrieval node. Its job is to decide whether the Knowledge Base info can actually answer the questions.
- **Context section**: Select `Knowledge Retrieval / {x} result Array[Object]` from the Knowledge Retrieval node
- **System Prompt**:
```plaintext wrap
Based on the `Context`, determine if the answer contains enough information to answer the questions. If the information is insufficient, you MUST reply with: "Information not found in knowledge base".
```
- **User Message**:
```plaintext wrap
questions: Iteration/{x} item
```
<Frame>
![LLM Settings](/images/difyworkflow101-lesson07-llmsettings.png)
</Frame>
</Step>
</Steps>
Here's what it looks like on the canvas.
<Frame>
![Workflow Preview](/images/difyworkflow101-lesson07-workflowpreview1.png)
</Frame>
#### Step 2: Setting the Crossroads
<Steps>
<Step title="Add If/Else Node">
After the LLM node, add an If/Else node. Set the rule: If the LLM output **Contains** **Information not found in knowledge base**.
This branch fires when the knowledge base can't answer the question.
<Frame>
![Add If/Else Node](/images/difyworkflow101-lesson07-addifelsenode.png)
</Frame>
</Step>
<Step title="Add Tool for Searching">
Let's connect a search tool after the IF branch. When the knowledge base can't find relevant information, we use web search to find the answers.
1. After the IF node, click the plus (+) icon and select **Tool**.
2. In the search box, type Google. Hover over Google, click **Install** on the right, and then click **Install** again in the pop-up window.
<Frame>
![Install Plugin](/images/difyworkflow101-lesson07-addtools.png)
</Frame>
</Step>
<Step title="Install Google Search">
Click **Google Search** under the Google plugin.
<Frame>
![Install Google Search](/images/difyworkflow101-lesson07-installgooglesearch.png)
</Frame>
</Step>
<Step title="Get Your API Key">
Using Google Search for the first time requires authorization—it's like needing a Wi-Fi password.
<Frame>
![Google Search Setup](/images/difyworkflow101-lesson07-googlesearchsetup.png)
</Frame>
1. Click API Key Authorization Configuration, then click Get your SerpApi API key from SerpApi. Sign in to get your private API key.
<Note>
Your API Key is your passport to the outside world. Keep it safe and avoid sharing it with others.
</Note>
<Frame>
![API Key Authorization Configuration](/images/difyworkflow101-lesson07-apikeyauthorizationconfiguration.png)
</Frame>
2. Copy and paste the API key into the **SerpApi API key** field. Click **Save**.
3. Once the API key is successfully authorized, the settings panel shows up immediately. Head to the **Query string** field and select `Iteration/{x} item`.
<Frame>
![Add Query String](/images/difyworkflow101-lesson07-addquerystring.png)
</Frame>
</Step>
<Step title="Configure the Two Paths">
Now, we need different ways to answer depending on which path we're looking at.
**The Search Answer Path**
Add a new LLM node to answer the question based on the search results. Connect it to the Google Search node.
**System**:
```plaintext wrap
You are a Web Research Specialist. Based on Google Search, concisely answer the user's questions. Please do not mention the knowledge base in your response.
```
**User Message**:
```plaintext wrap
results: GOOGLESEARCH/{x} text
questions: Iteration/{x} item
```
<Frame>
![Prompt for LLM 2](/images/difyworkflow101-lesson07-llm2prompt.png)
</Frame>
**The Knowledge Searching Path**
After the Else node, add a new LLM node to handle answers based on the knowledge base.
**System**:
```plaintext wrap
You are a professional Dify Customer Service Manager. Strictly follow the `Context` to reply to questions.
```
**User Message**:
```plaintext wrap
questions: Iteration/{x} item
```
<Frame>
![Prompt for LLM 3](/images/difyworkflow101-lesson07-promptllm3.png)
</Frame>
</Step>
<Step title="Combine the Information">
1. In the sub-process (inside the Iteration box), add a Variable Aggregator node that connects both LLM 2 and LLM 3 at the very end.
2. In the Variable Aggregator panel, select the variables `LLM 2/{x}text String` and `LLM 3/{x}text String` as the Assign Variables.
In this way, we're merging the two possible answers into a single path.
<Frame>
![Variable Aggregator Setup](/images/difyworkflow101-lesson07-variableaggregatorsetup.png)
</Frame>
</Step>
</Steps>
This is how the current workflow looks.
<Frame>
![Workflow Preview 2](/images/difyworkflow101-lesson07-workflowpreview2.png)
</Frame>
#### Step 3: The Final Email Assembly
Now that our logic branches have finished processing, let's combine all the answers into a single, polished email.
<Steps>
<Step title="Configure Iteration Output">
Click on the Iteration node, and set `{x}Variable Aggregator/{x}output String` as the output variables.
<Frame>
![Iteration Output](/images/difyworkflow101-lesson07-iterationoutput.png)
</Frame>
</Step>
<Step title="Connect the Summary LLM">
After the Iteration node, connect a new LLM node to summarize all outputs. Feel free to use the prompt below.
**System**:
```plaintext wrap
You are a professional Customer Service Manager. Summarize all the answers of the questions, and organize a clear and complete email reply for the customer.
Do not include content where the knowledge base could not find relevant information.
Signature: Anne.
```
**User Message**:
```plaintext wrap
questions: Iteration/ {x} output
customer: User Input / {x} customer_name
```
<Frame>
![Prompt for LLM 4](/images/difyworkflow101-lesson07-promptforllm4.png)
</Frame>
</Step>
<Step title="Finalize with the Output Node">
After the LLM node, add an End node. Set the output variable as `LLM 4/{x}text String`.
<Frame>
![Output Setup](/images/difyworkflow101-lesson07-outputsetup.png)
</Frame>
</Step>
</Steps>
We have now completed the entire setup and configuration of the workflow. Our email assistant can now answer questions based on the Knowledge Base and use Google Search for supplementary answers when needed.
<Frame>
![Final Workflow Preview](/images/difyworkflow101-lesson07-finalworkflowpreview.png)
</Frame>
Try sending an email with a question that definitely isn't in the knowledge base. Let's see if the AI successfully uses Google to find the answer.
## Mini Challenge
1. What are other conditions you can choose in If/Else node?
2. Browse the Marketplace to see if you can add another tool to this workflow.

---
title: "Lesson 8: The Agent Node"
sidebarTitle: "Lesson 8: The Agent Node"
---
Let's look back at the upgrades we've made to our email assistant.
- Learned to Read: It can search a Knowledge Base
- Learned to Choose: It uses Conditions to make decisions
- Learned to Multitask: It handles multiple questions via Iteration
- Learned to Use Tools: It can access the Internet via Google Search
You might have noticed that our workflow is no longer just a straight line (Step 1 → Step 2 → Step 3).
It's becoming a system that analyzes, judges, and calls upon different abilities to solve problems. This advanced pattern is what we call an Agentic Workflow.
## Agentic Workflow
An Agentic Workflow isn't just Input \> Process \> Output.
It involves thinking, planning, using tools, and adjusting based on results. It transforms the AI from a simple Executor (who just follows orders) into an intelligent Agent (who solves problems autonomously).
## Agent Strategies
To make Agents work smarter, researchers designed Strategies—think of these as different modes of thinking that guide the Agent.
- **ReAct (Reason + Act)**
The Think, then Do approach. The Agent thinks (What should I do?), acts (calls a tool), observes the result, and then thinks again. It loops until the job is done.
- **Plan-and-Execute**
Make a full plan first, then do it step-by-step.
- **Chain of Thought (CoT)**
Writing out the reasoning steps before giving an answer to improve accuracy.
- **Self-Correction**
Checking its own work and fixing mistakes.
- **Memory**
Equipping the Agent with short-term or long-term memory allows it to recall previous conversations or key details, enabling more coherent and personalized responses.
In Lesson 7, we manually built a Brain using Knowledge Retrieval -\> LLM to Decide -\> If/Else -\> Search. It worked, but it was complicated to build.
Is there a simpler way? Yes, and here it is.
## Agent Node
The Agent Node is a highly packaged intelligent unit.
You just need to set a Goal for it through instructions and provide the Tools it might need. Then, it can autonomously think, plan, select, and call tools internally (using the selected Agent Strategy, such as ReAct, and the model's Function Calling capability) until it completes your set goal.
In Dify, this greatly simplifies the process of building complex Agentic Workflows.
## Hands-on 1: Build with Agent Node
Our goal is to replace that complex manual logic inside our Iteration loop with a single, smart Agent Node.
<Steps>
<Step title="Clean up the Iteration">
Go to the sub-process of the Iteration node. Keep the Knowledge Retrieval node, and delete the other nodes inside it.
<Frame>
![Iteration](/images/difyworkflow101-lesson08-iteration.png)
</Frame>
</Step>
<Step title="Add the Agent Node">
Add an Agent node right after the Knowledge Retrieval node.
<Frame>
![Add Agent Node](/images/difyworkflow101-lesson08-addagentnode.png)
</Frame>
</Step>
<Step title="Install Agent Strategy">
Since we haven't used this before, we need to install a strategy from the Marketplace.
Click the Agent node. In the right panel, look for Agent Strategy. Click Find more in Marketplace.
<Frame>
![Search Agent Strategy](/images/difyworkflow101-lesson08-searchagentstrategy.png)
</Frame>
</Step>
<Step title="Pick an Agent Strategy">
In the Marketplace, find Dify Agent Strategy and install it.
<Frame>
![Choose Agent Strategy](/images/difyworkflow101-lesson08-chooseagentnode.png)
</Frame>
</Step>
<Step title="Select ReAct">
Back in your workflow (refresh if needed), select ReAct under Agent Strategy.
<Frame>
![Select ReAct](/images/difyworkflow101-lesson08-selectreact.png)
</Frame>
**Why ReAct here?**
ReAct (Reason + Act) is a strategy that mimics human problem-solving using a Think → Do → Check loop.
1. Reason: The Agent thinks, What should I do next? (e.g., Check the Knowledge Base).
2. Act: It performs the action.
3. Observe: It checks the result. If the answer isn't found, it repeats the cycle (e.g., Okay, I need to search Google).
This thinking-while-doing approach is perfect for complex tasks where the next step depends on the previous result.
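To make the loop concrete, here is a stripped-down sketch in Python. This is illustration only: the Agent node implements this internally with a real model and real tools, and the `toy_llm`, `kb_lookup`, and `search` names below are hypothetical stand-ins.

```python
# A bare-bones ReAct loop: Reason → Act → Observe, repeated until done.
def react_agent(question, llm_think, tools, max_steps=5):
    observation = None
    for _ in range(max_steps):
        thought, action, arg = llm_think(question, observation)  # 1. Reason
        if action == "finish":
            return arg                     # the agent decided it's done
        observation = tools[action](arg)   # 2. Act, then 3. Observe the result
    return "Gave up after too many steps."

# Toy stand-ins: check a "knowledge base" first, then fall back to "search".
kb = {"pricing": "Dify has a free Sandbox plan."}

def toy_llm(question, observation):
    if observation is None:
        return ("check the KB first", "kb_lookup", "pricing")
    if observation == "not found":
        return ("KB failed, try the web", "search", question)
    return ("done", "finish", observation)

tools = {
    "kb_lookup": lambda key: kb.get(key, "not found"),
    "search": lambda q: f"Web result for: {q}",
}

print(react_agent("Do you have a free plan?", toy_llm, tools))
# → Dify has a free Sandbox plan.
```

Notice the control flow: the "model" chooses the next tool based on the last observation, which is exactly the check-the-KB-then-search behavior we hand-built in Lesson 7, now emerging from the loop itself.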
</Step>
<Step title="Choose a Model">
ReAct is a thinking strategy, but to actually pull off the action part, the AI needs the right "physical" skill, which is called **Function Calling**. Select a model that supports Function Calling. Here, we choose gpt-5.
**Why Function Calling?**
One of the core capabilities of an Agent Node is to autonomously call tools. Function Calling is the key technology that allows the model to understand when and how to use the tools you provide (like Google Search).
If the model doesn't support this feature, the Agent cannot effectively interact with tools and loses most of its autonomous decision-making capabilities.
<Frame>
![Choose a Model](/images/difyworkflow101-lesson08-chooseamodel.png)
</Frame>
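For intuition, a tool is presented to a function-calling model as a structured schema. Below is a hypothetical, OpenAI-style definition for a search tool; Dify registers the real schemas for you behind the scenes.

```python
# Hypothetical schema: how a tool might be described to a function-calling model.
google_search_tool = {
    "type": "function",
    "function": {
        "name": "google_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
            },
            "required": ["query"],
        },
    },
}

# The model never runs the tool itself. It emits a structured call such as
# {"name": "google_search", "arguments": {"query": "latest Dify release"}},
# and the runtime executes it and feeds the result back to the model.
print(google_search_tool["function"]["name"])  # → google_search
```

A model without Function Calling can only produce free text, so it has no reliable way to emit these structured calls.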
</Step>
<Step title="Add Tool">
Click the Agent node. Click the plus (+) icon in the tool list and select Google Search.
<Frame>
![Add Tool](/images/difyworkflow101-lesson08-addtool.png)
</Frame>
</Step>
<Step title="Add Instructions">
We need to tell the Agent specifically what to do with the tools and context we are giving it. Copy and paste the instructions into the Instruction field:
```plaintext wrap
Goal: Answer user questions about Dify products.
Steps:
1. I have provided a relevant internal knowledge base retrieval result. First, judge if this result can fully answer the user's questions.
2. If the context clearly answers it, generate the final answer based on the context.
3. If the answer is insufficient or irrelevant, use the Google Search tool to find the latest information and generate the answer based on search results.
Requirement: Keep the final answer concise and accurate.
```
<Frame>
![Add Instructions](/images/difyworkflow101-lesson08-addinstructions.png)
</Frame>
</Step>
<Step title="Context and Query">
Your configuration here is crucial for the Agent to see the data.
- **Context**: Select `Knowledge Retrieval / {x} result Array[Object]` from the Knowledge Retrieval node (this passes the knowledge base content to the Agent).
- **Query**: Select `Iteration/{x} item` from the Iteration node.
**Why item instead of the original email_content?**
We used the Parameter Extractor to extract a list of questions (`question_list`) from the `email_content`. The Iteration node is processing this list one by one, where item represents the specific question currently being handled.
Using item as the query input allows Agent to focus on the current task, improving the accuracy of decision-making and actions.
<Frame>
![Context and Query](/images/difyworkflow101-lesson08-contextandquery.png)
</Frame>
</Step>
<Step title="Set Iteration Output">
Select `Agent/{x} text String` as the output variable.
<Frame>
![Set Iteration Output](/images/difyworkflow101-lesson08-iterationoutput.png)
</Frame>
</Step>
</Steps>
<Check>
🎉 The Iteration node is now upgraded.
</Check>
Since the Iteration node generates a list of answers, we need to stitch them back together into one email.
## Hands-on 2: Final Assembly
<Steps>
<Step title="The Final Editor (LLM)">
1. Add an LLM node after the Iteration node.
2. Click on it and add a prompt to the System message. Feel free to use the prompt below, or edit it yourself.
```plaintext wrap
Combine all answers for the original email.
Write a complete, clear, and friendly reply to the customer.
Signature: Anne
```
3. Add a User message that fills in the answers, email content, and customer name with variables respectively. Here's how the LLM node looks right now.
<Frame>
![Final LLM](/images/difyworkflow101-lesson08-finalllm.png)
</Frame>
</Step>
<Step title="Add Output Node">
Set the output variable to the LLM's text and name it `email_reply`.
<Frame>
![Add Output Node](/images/difyworkflow101-lesson08-addoutputnode.png)
</Frame>
</Step>
</Steps>
Here comes the final workflow.
<Frame>
![Final Workflow](/images/difyworkflow101-lesson08-finalworkflow.png)
</Frame>
Click **Test Run**. Ask a mix of questions. Watch how the Agent Node autonomously decides when to use the context and when to use Google search.
## Mini Challenge
1. Could we use an Agent Node to replace the entire Iteration loop? How would you design the prompt to handle a list of questions all at once?
2. What other information could you feed into the Agent's Context field to help it make better decisions?

---
title: "Lesson 9: Layout Designer (Template)"
sidebarTitle: "Lesson 9: Layout Designer"
---
In Lesson 8, we successfully built a powerful Agent that can think and search. However, you might have noticed a tiny issue: even though we asked the final LLM to list the answers, sometimes the formatting can be a bit messy or inconsistent (e.g., mixing bullet points with paragraphs).
To fix this, we need a dedicated format assistant to organize the answers into a beautiful, standardized format before the final LLM writes the email.
## Template
The Template node takes the original data (like your list of answers), follows the strict design standards you provide, and generates a perfectly formatted block of text, ensuring consistency every single time.
## Hands-On: Polish the Email Layout
<Steps>
<Step title="Update the LLM Node">
Since the Template node will handle the greetings, we need to tell the LLM to focus solely on the questions and answers. Copy and paste the prompt below, or edit it yourself.
```plaintext wrap
Combine all answers for the original email. Write a complete, clear, and friendly reply that only includes the summarized answers.
IMPORTANT: Focus SOLELY on the answers. Do NOT include greetings (like "Hi Name"), do
NOT write intro paragraphs (like "Thank you for reaching out"), and do NOT include
signatures.
```
</Step>
<Step title="Add User Message">
Reference the relevant variables in the user message.
<Frame>
![Edit LLM Node](/images/difyworkflow101-lesson09-editllmnode.png)
</Frame>
</Step>
<Step title="Add Template Node">
After the LLM node, click to add a Template node.
<Frame>
![Add Template Node](/images/difyworkflow101-lesson09-addtemplatenode.png)
</Frame>
</Step>
<Step title="Set up the Input Variables">
Click the Template node, go to the Input Variables section, and add these two items:
- `customer`: Choose `User Input / {x} customer_name String`
- `body`: Choose `LLM / {x} text String`
<Frame>
![Template Input Variable](/images/difyworkflow101-lesson09-templateinputvariable.png)
</Frame>
</Step>
<Step title="Format with Jinja">
**What is Jinja2?**
In simple terms, Jinja2 is a tool that allows you to format variables (like your list of answers) into a text template exactly how you want. It uses simple symbols to mark where variables go and perform basic logic. With it, we can turn a raw list of data into a neat, standardized text block.
Here, we can put together opening, signatures, and email body to make sure the email is professional and consistent every time.
Copy and paste this exact layout into the Template code box:
```jinja
Hi {{ customer }},
Thank you for reaching out to us, and we are more than happy to provide you with the information you are seeking.
Here are the details regarding your specific questions:
{{ body }}
---
Thank you for reaching out to us!
Best regards,
Anne
```
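Beyond inserting variables, Jinja2 can also branch. As a small sketch (not required for this lesson's workflow), an `{% if %}` block could supply a fallback line when `body` comes back empty:
```jinja
{% if body %}
{{ body }}
{% else %}
We have received your email and will get back to you shortly.
{% endif %}
```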
</Step>
</Steps>
Here's the final workflow.
<Frame>
![Final Workflow](/images/difyworkflow101-lesson09-thefinalworkflow.png)
</Frame>
Click **Test Run**. Ask multiple questions in one email. Notice how your final output has a perfectly written custom intro, the LLM's beautifully summarized answers in the middle, and a standard, professional signature at the bottom.
## Mini Challenge
1. How would you change the Jinja2 code to make a numbered list (1. Answer, 2. Answer) instead of bullet points?
<Tip>
Check the [Template Designer Documentation](https://jinja.palletsprojects.com/en/stable/templates/) or ask an LLM about it.
</Tip>
2. What else can Template node do?

---
title: "Lesson 10: Publish and Monitor Your AI App"
sidebarTitle: "Lesson 10: Publish and Monitor"
---
After building and tuning, your Email Assistant is now fully ready. It can read knowledge bases, use search tools, and generate beautifully formatted replies. But right now, it's still sitting inside your Dify Studio and only you can see it.
How do we share it with others? How do we know if it's working correctly when we aren't watching?
It's time for the final two critical steps: Publish and Monitor.
## Publish Your Application
1. Move your mouse to the top right corner of the canvas and click the **Publish** button. You'll see other buttons light up.
<Note>
Whenever you make changes to your workflow, you must click **Publish → Update** to save them.
If you don't update, the live version will remain the old one.
</Note>
<Frame>
![Publish](/images/difyworkflow101-lesson10-publish.png)
</Frame>
2. Once published, the previously grayed-out buttons become clickable.
1. **Share Your App**
Click **Run App**. Dify automatically generates a WebApp for you. This is a ready-to-use chat interface for your Email Assistant.
You can send this URL to colleagues or friends. They don't need to log in to Dify to use the email assistant.
<Frame>
![WebApp](/images/difyworkflow101-lesson10-webapp.png)
</Frame>
2. **Batch Run App**
If you have 100 emails to reply to, copying and pasting them one by one will slow you down.
In Dify, all you need to do is prepare a CSV file with the 100 emails. Upload it to Dify's Batch Run feature. Dify processes all 100 emails automatically and gives you back a spreadsheet with all the generated replies.
Since we set specific variables (like `email_content`), your CSV must match that format. Dify provides a template you can download to make this easy.
<Frame>
![Download Template](/images/difyworkflow101-lesson10-downloadtemplate.png)
</Frame>
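To illustrate, here's a minimal Python sketch that builds such a CSV using the two input variables from this tutorial (`customer_name` and `email_content`). The exact header format is an assumption; download Dify's template to confirm it before a real batch run.

```python
import csv

# Hypothetical rows; the column names must match your workflow's
# input variables (check Dify's downloadable CSV template for the
# authoritative header format).
emails = [
    {"customer_name": "Alice", "email_content": "What is your refund policy?"},
    {"customer_name": "Bob", "email_content": "Do you ship internationally?"},
]

with open("batch_emails.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_name", "email_content"])
    writer.writeheader()
    writer.writerows(emails)
```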
3. **Others**
   - **Access API Reference**: If you can code, get an API Key to integrate this workflow directly into your own website or mobile app
- **Open in Explore**: Pin this app to your workspace sidebar for quick access next time
- **Publish as a Tool**: Package your workflow as a plugin so other Agents can use your Email Assistant as a tool
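As a sketch of that API option, the snippet below builds a request against Dify's workflow-run endpoint (`POST /v1/workflows/run` at the time of writing; confirm the URL, payload shape, and response fields in your app's API Reference page). The input keys and the `email_reply` output name follow this tutorial's setup.

```python
import json
import urllib.request

API_KEY = "app-..."  # from your app's API Access page
BASE_URL = "https://api.dify.ai/v1"  # or your self-hosted URL

# Input keys must match the workflow's input variables.
payload = {
    "inputs": {
        "customer_name": "Alice",
        "email_content": "What is your refund policy?",
    },
    "response_mode": "blocking",
    "user": "demo-user-001",
}

req = urllib.request.Request(
    f"{BASE_URL}/workflows/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# Uncomment once API_KEY is set; the response field path is an
# assumption based on the output variable named in Lesson 8:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["data"]["outputs"]["email_reply"])
```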
## Monitor Your App
As the creator, you need to understand the status of this assistant. By monitoring and reading logs, you can check its health, performance, and costs.
### The Command Center: Monitoring
Click **Monitoring** on the left sidebar to see how your app is performing.
| Name | Explanation |
| :--------------------- | :----------------------------------------------------------------------------------- |
| Total Messages | How many times users interacted with the AI today. It shows how popular your app is. |
| Active Users | The number of unique people engaging with the AI. |
| Token Usage | How many tokens the AI used. Watch for sudden spikes to control costs. |
| Avg. User Interactions | The average number of messages per conversation, showing whether users ask follow-up questions. |
### The Magnifying Glass: Logs
Logs record the details of every single run: time, input, duration, and output. To access detailed records, click **Logs** in the left sidebar.
**Why Logs?**
- **Debugging**: User says *It doesn't work*? Check the logs to replay the *crime scene* and see exactly which node failed.
- **Performance**: See how long each node took. Find the blocker that is slowing things down.
- **Understand Users**: Read what users are actually asking. Use this real data to update your Knowledge Base or improve your Prompts.
- **Cost Control**: Check exactly how many tokens a specific run cost.
| Name | Explanation |
| :------------------ | :---------------------------------------------------------- |
| Start Time | The time when the workflow was triggered. |
| Status | Success or failure. |
| Run Time | How long the whole process took. |
| Tokens | The tokens consumed by this run. |
| End User or Account | The specific user ID or account that initiated the session. |
| Triggered By | Whether the run came from the WebApp interface or an API call. |
You can click on each log entry to view more details. For example, you can identify frequently asked user questions and use them to keep your Knowledge Base up to date.
Building an AI app is a new starting point, and this is the core of **LLMOps** (large language model operations).
1. **Observe**: Look at the Logs. What are users asking? Are they happy with the answers?
2. **Analyze**: Spot patterns, such as hallucinations on certain questions or tools that fail frequently
3. **Optimize**: Go back to the Canvas. Edit the Prompt, add a document to the Knowledge Base, or tweak the workflow logic
4. **Publish**: Release the upgraded version
By repeating this cycle, your Email Assistant gets smarter and faster.
## Thank You
**Thank you for your time. You're now a Dify builder with a new way of thinking:**
```plaintext wrap
Break down the task → Choose Nodes and Tools → Connect them with the right logic → Monitor and upgrade
```
Now, feel free to open a template in Dify Explore. Break it down, analyze it, or start building a workflow from scratch that solves a task in your daily work.
May your workload get lighter and your imagination soar higher. Happy building with Dify.
