mirror of
https://github.com/langgenius/dify-docs.git
synced 2026-03-27 13:28:32 +07:00
567 lines
21 KiB
Plaintext
---
title: "30-Minute Quick Start"
description: "Dive into Dify through an example app"
icon: "forward"
---
This step-by-step tutorial will walk you through creating a multi-platform content generator from scratch.

Beyond basic LLM integration, you'll discover how to use powerful Dify nodes to orchestrate sophisticated AI applications faster with less effort.

By the end of this tutorial, you'll have a workflow that takes whatever content you throw at it (text, documents, or images), adds your preferred voice and tone, and spits out polished, platform-specific social media posts in your chosen language.

The complete workflow is shown below. Feel free to refer back to this as you build to stay on track and see how all the nodes work together.

<Frame>
![](https://assets-docs.dify.ai/2025/11/bbbd99b8d17640ff77469eee44c9b35b.png)
</Frame>

## Before You Start
<Steps>
<Step title="Sign in to Dify Cloud">

Go to [Dify Cloud](https://cloud.dify.ai) and sign up for free.

New accounts on the Sandbox plan include 200 message credits for calling models from providers like OpenAI, Anthropic, and Gemini.

<Info>
Message credits are a one-time allocation and don't renew monthly.
</Info>

</Step>

<Step title="Set Up the Model Provider">

Go to **Settings** > **Model Provider** and install the OpenAI plugin. This tutorial uses `gpt-5.2` for the examples.

If you're using Sandbox credits, no API key is required—the plugin is ready to use once installed. You can also configure your own API key and use it instead.

</Step>

<Step title="Configure the Default Model">

1. In the top-right corner of the **Model Provider** page, click **System Model Settings**.
2. Set the **System Reasoning Model** to `gpt-5.2`. This becomes the default model in the workflow.

</Step>
</Steps>
## Step 1: Create a New Workflow

1. Go to **Studio**, then select **Create from blank** > **Workflow**.
2. Name the workflow `Multi-platform content generator` and click **Create**. You'll automatically land on the workflow canvas to start building.
3. Select the User Input node to start our workflow.

## Step 2: Orchestrate & Configure
<Note>
Keep any unmentioned settings at their default values.
</Note>

<Tip>
Give nodes and variables clear, descriptive names to make them easier to identify and reference.
</Tip>
### 1. Collect User Inputs: User Input Node

<Info>
First, we need to define what information to gather from users for running our content generator, such as the draft text, target platforms, desired tone, and any reference materials.

The User Input node is where we can easily set this up. Each input field we add here becomes a variable that all downstream nodes can reference and use.
</Info>

Click the User Input node to open its configuration panel, then add the following input fields.

<Accordion title="Reference materials - text">
- Field type: `Paragraph`
- Variable Name: `draft`
- Label Name: `Draft`
- Max length: `2048`
- Required: `Yes`
</Accordion>

<Accordion title="Reference materials - files">
- Field type: `File list`
- Variable Name: `user_file`
- Label Name: `Upload File (≤ 10)`
- Support File Types: `Document`, `Image`
- Upload File Types: `Both`
- Max number of uploads: `10`
- Required: `No`
</Accordion>

<Accordion title="Voice and tone">
- Field type: `Paragraph`
- Variable Name: `voice_and_tone`
- Label Name: `Voice & Tone`
- Max length: `2048`
- Required: `No`
</Accordion>

<Accordion title="Target platform">
- Field type: `Short Text`
- Variable Name: `platform`
- Label Name: `Target Platform (≤ 10)`
- Max length: `256`
- Required: `Yes`
</Accordion>

<Accordion title="Language requirements">
- Field type: `Select`
- Variable Name: `language`
- Label Name: `Language`
- Options:
  - `English`
  - `日本語`
  - `简体中文`
- Required: `Yes`
</Accordion>

<Frame>
![](https://assets-docs.dify.ai/2025/11/7c2b6e96f8a69ace1b5dab174de46c99.png)
</Frame>
### 2. Identify Target Platforms: Parameter Extractor Node

<Info>
Since our platform field accepts free-form text input, users might type in various ways: `x and linkedIn`, `post on Twitter and LinkedIn`, or even `Twitter + LinkedIn please`.

However, we need a clean and structured list, like `["Twitter", "LinkedIn"]`, that downstream nodes can work with reliably.

This is the perfect job for the Parameter Extractor node. In our case, it uses the `gpt-5.2` model to analyze users' natural language, recognize all these variations, and output a standardized array.
</Info>

After the User Input node, add a Parameter Extractor node and configure it:

1. In the **Input Variable** field, select `User Input/platform`.
2. Add an extract parameter:
   - Name: `platform`
   - Type: `Array[String]`
   - Description: `The platform(s) for which the user wants to create tailored content.`
   - Required: `Yes`
3. In the **Instruction** field, paste the following to guide the LLM in parameter extraction:

```markdown INSTRUCTION
# TASK DESCRIPTION
Parse platform names from input and output as a JSON array.

## PROCESSING RULES
- Support multiple delimiters: commas, semicolons, spaces, line breaks, "and", "&", "|", etc.
- Standardize common platform name variants (twitter/X→Twitter, insta→Instagram, etc.)
- Remove duplicates and invalid entries
- Preserve unknown but reasonable platform names
- Preserve the original language of platform names

## OUTPUT REQUIREMENTS
- Success: ["Platform1", "Platform2"]
- No platforms found: [No platforms identified. Please enter a valid platform name.]

## EXAMPLES
- Input: "twitter, linkedin" → ["Twitter", "LinkedIn"]
- Input: "x and insta" → ["Twitter", "Instagram"]
- Input: "invalid content" → [No platforms identified. Please enter a valid platform name.]
```

<Check>
Note that we've instructed the LLM to output a specific error message for invalid inputs, which will serve as the end trigger for our workflow in the next step.
</Check>
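The extraction itself happens inside the LLM, but the processing rules are concrete enough to sketch deterministically. The alias table and delimiter set below are illustrative assumptions, not Dify internals, and unlike the model this sketch cannot judge whether an unknown token is a plausible platform name:

```python
import re

# Illustrative alias table: mirrors the "standardize common platform
# name variants" rule, but is not part of Dify.
ALIASES = {"x": "Twitter", "twitter": "Twitter",
           "insta": "Instagram", "instagram": "Instagram",
           "linkedin": "LinkedIn"}
ERROR = "No platforms identified. Please enter a valid platform name."

def extract_platforms(text: str) -> list[str]:
    # Split on common delimiters: commas, semicolons, "&", "+", "|", whitespace
    tokens = re.split(r"[,;|&+\s]+", text)
    seen: set[str] = set()
    result: list[str] = []
    for tok in tokens:
        tok = tok.strip()
        if not tok or tok.lower() == "and":  # treat "and" as a delimiter
            continue
        # Normalize known aliases; keep unknown-but-reasonable names
        name = ALIASES.get(tok.lower(), tok.title())
        if name.lower() not in seen:  # remove duplicates
            seen.add(name.lower())
            result.append(name)
    # The error sentinel is what ends the workflow early in the next step
    return result or [ERROR]
```

For example, `extract_platforms("x and insta")` normalizes both aliases and returns `["Twitter", "Instagram"]`.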
<Frame>
![](https://assets-docs.dify.ai/2025/11/4a9e2ec57f8ea2a3fe943af6b7585d97.png)
</Frame>

### 3. Validate Platform Extraction Results: IF/ELSE Node

<Info>
What if a user enters an invalid platform name, like `ohhhhhh` or `BookFace`? We don't want to waste time and tokens generating useless content.

In such cases, we can use an IF/ELSE node to create a branch that stops the workflow early. We'll set a condition that checks for the error message from the Parameter Extractor node; if that message is detected, the workflow will route directly to an Output node and end.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/9a438a3de1e7ea2fb0becc5d9e1a1a71.png)
</Frame>

1. After the Parameter Extractor node, add an IF/ELSE node.
2. On the IF/ELSE node's panel, define the **IF** condition:

   **IF** `Parameter Extractor/platform` **contains** `No platforms identified. Please enter a valid platform name.`

3. After the IF/ELSE node, add an Output node to the IF branch.
4. On the Output node's panel, set `Parameter Extractor/platform` as the output variable.
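In code terms, the branch amounts to a single membership check against the sentinel string (a sketch; the branch labels are ours, not Dify identifiers):

```python
ERROR = "No platforms identified. Please enter a valid platform name."

def route(platforms: list[str]) -> str:
    # The IF branch fires when the error sentinel appears anywhere in the
    # extracted array; otherwise execution continues on the ELSE branch.
    return "IF" if ERROR in platforms else "ELSE"
```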
### 4. Separate Uploaded Files by Type: List Operator Node

<Info>
Our users can upload both images and documents as reference materials, but these two types require different handling with `gpt-5.2`: images can be interpreted directly via its vision capability, while documents must first be converted to text before the model can process them.

To manage this, we'll use two List Operator nodes to filter and split the uploaded files into separate branches—one for images and one for documents.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/e2853d7c55d0a9a4b85b6bd01f5193eb.png)
</Frame>

1. After the IF/ELSE node, add **two** parallel List Operator nodes to the ELSE branch.
2. Rename one node to `Image` and the other to `Document`.
3. Configure the Image node:
   1. Set `User Input/user_file` as the input variable.
   2. Enable **Filter Condition**: `{x}type` **in** `Image`.
4. Configure the Document node:
   1. Set `User Input/user_file` as the input variable.
   2. Enable **Filter Condition**: `{x}type` **in** `Doc`.
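Conceptually, the two filters behave like the Python sketch below. The extension sets are assumptions for illustration; Dify filters on its own file-type metadata rather than on file extensions:

```python
from pathlib import Path

# Assumed extension sets, for illustration only.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}
DOC_EXTS = {".pdf", ".docx", ".txt", ".md"}

def split_by_type(files: list[str]) -> tuple[list[str], list[str]]:
    # One pass per branch, mirroring the two parallel List Operator nodes.
    images = [f for f in files if Path(f).suffix.lower() in IMAGE_EXTS]
    docs = [f for f in files if Path(f).suffix.lower() in DOC_EXTS]
    return images, docs
```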
### 5. Extract Text from Documents: Doc Extractor Node

<Info>
`gpt-5.2` cannot directly read uploaded documents like PDF or DOCX, so we must first convert them into plain text.

This is exactly what a Doc Extractor node does. It takes document files as input and outputs clean, usable text for the next steps.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/9ff128f1321d1bd71a31a38fca68f66c.png)
</Frame>

1. After the Document node, add a Doc Extractor node.
2. On the Doc Extractor node's panel, set `Document/result` as the input variable.

### 6. Integrate All Reference Materials: LLM Node

<Info>
When users provide multiple reference types—draft text, documents, and images—simultaneously, we need to consolidate them into a single, coherent summary.

An LLM node will handle this task by analyzing all the scattered pieces to create a comprehensive context that guides subsequent content generation.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/947263e4501b02f6ef38630cefd5b8b5.png)
</Frame>

1. After the Doc Extractor node, add an LLM node.
2. Connect the Image node to this LLM node as well.
3. Click the LLM node to configure it:
   1. Rename it to `Integrate Info`.
   2. Enable **VISION** and set `Image/result` as the vision variable.
   3. In the system instruction field, paste the following:
```markdown wrap
# ROLE & TASK
You are a content strategist. Analyze the provided draft and reference materials (if any), then create a comprehensive content foundation for multi-platform social media optimization.

# ANALYSIS PRINCIPLES
- Work exclusively with provided information—no external assumptions
- Focus on extraction, synthesis, and strategic interpretation
- Identify compelling and actionable elements
- Prepare insights adaptable across different platforms

# REQUIRED ANALYSIS
Deliver structured analysis with:

## 1. CORE MESSAGE
- Central theme, purpose, objective
- Key value or benefit being communicated

## 2. ESSENTIAL CONTENT ELEMENTS
- Primary topics, facts, statistics, data points
- Notable quotes, testimonials, key statements
- Features, benefits, characteristics mentioned
- Dates, locations, contextual details

## 3. STRATEGIC INSIGHTS
- What makes content compelling/unique
- Emotional/rational appeals present
- Credibility factors, proof points
- Competitive advantages highlighted

## 4. ENGAGEMENT OPPORTUNITIES
- Discussion points, questions emerging
- Calls-to-action, next steps suggested
- Interactive/participation opportunities
- Trending themes touched upon

## 5. PLATFORM OPTIMIZATION FOUNDATION
- High-impact: Quick, shareable formats
- Professional: Business-focused discussions
- Community: Interaction and sharing
- Visual: Enhanced with strong visuals

## 6. SUPPORTING DETAILS
- Metrics, numbers, quantifiable results
- Direct quotes, testimonials
- Technical details, specifications
- Background context available
```

4. Click **Add Message** to add a user message, then paste the following. Type `{` or `/` to replace `Doc Extractor/text` and `User Input/draft` with the corresponding variables from the list.

```markdown USER
Draft: User Input/draft
Reference material: Doc Extractor/text
```

<Frame>
![](https://assets-docs.dify.ai/2025/11/f0b9bc2a7a63dafe47f586f54adbdf3a.png)
</Frame>
### 7. Create Customized Content for Each Platform: Iteration Node

<Info>
Now that the integrated references and target platforms are ready, let's generate a tailored post for each platform using an Iteration node.

The node will loop through the list of platforms and run a sub-workflow for each: first analyze the specific platform's style guidelines and best practices, then generate optimized content based on all available information.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/ff35b9ce5b1d0bb163604a184291f6cf.png)
</Frame>

1. After the Integrate Info node, add an Iteration node.
2. Inside the Iteration node, add an LLM node and configure it:
   1. Rename it to `Identify Style`.
   2. In the system instruction field, paste the following:

```markdown wrap
# ROLE & TASK
You are a social media expert. Analyze the platform and provide content creation guidelines.

# ANALYSIS REQUIRED
For the given platform, provide:

## 1. PLATFORM PROFILE
- Platform type and category
- Target audience characteristics

## 2. CONTENT GUIDELINES
- Optimal content length (characters/words)
- Recommended tone (professional/casual/conversational)
- Formatting best practices (line breaks, emojis, etc.)

## 3. ENGAGEMENT STRATEGY
- Hashtag recommendations (quantity and style)
- Call-to-action best practices
- Algorithm optimization tips

## 4. TECHNICAL SPECS
- Character/word limits
- Visual content requirements
- Special formatting needs

## 5. PLATFORM-SPECIFIC NOTES
- Unique features or recent changes
- Industry-specific considerations
- Community engagement approaches

# OUTPUT REQUIREMENTS
- For recognized platforms: Provide specific guidelines
- For unknown platforms: Base recommendations on similar platforms
- Focus on actionable, practical advice
- Be concise but comprehensive
```

3. Click **Add Message** to add a user message, then paste the following. Type `{` or `/` to replace `Current Iteration/item` with the corresponding variable from the list.

```markdown USER
Platform: Current Iteration/item
```
3. After the Identify Style node, add another LLM node and configure it:
   1. Rename it to `Create Content`.
   2. In the system instruction field, paste the following:

```markdown wrap
# ROLE & TASK
You are an expert social media content creator. Generate publication-ready content that matches platform guidelines, incorporates source information, and follows specified voice/tone and language requirements.

# LANGUAGE REQUIREMENT
- Generate ALL content exclusively in the target language specified in the user message. You MUST write the entire post in that language, regardless of the language of any source materials.
- No mixing of languages whatsoever
- Adapt platform terminology to the target language

# CONTENT REQUIREMENTS
- Follow platform guidelines exactly (format, length, tone, hashtags)
- Integrate source information effectively (key messages, data, value props)
- Apply voice & tone consistently (if provided)
- Optimize for platform-specific engagement
- Ensure cultural appropriateness for the specified language

# OUTPUT FORMAT
- Generate ONLY the final social media post content. No explanations or meta-commentary. Content must be immediately copy-paste ready.
- Maximum heading level: ## (H2) - never use # (H1)
- No horizontal dividers: avoid ---

# QUALITY CHECKLIST
✅ Platform guidelines followed
✅ Source information integrated
✅ Voice/tone consistent (when provided)
✅ Language consistency maintained
✅ Engagement optimized
✅ Publication ready
```

3. Click **Add Message** to add a user message, then paste the following. Type `{` or `/` to replace all inputs with the corresponding variables from the list.

```markdown USER
Platform Name: Current Iteration/item
Target Language: User Input/language
Platform Guidelines: Identify Style/text
Source Information: Integrate Info/text
Voice & Tone: User Input/voice_and_tone
```

4. Enable structured output.

<Info>
This allows us to extract specific pieces of information from the LLM's response in a more reliable way, which is crucial for the next step where we format the final output.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/9ad72056b0a01a2f38a920ddbcd95e40.png)
</Frame>

   1. Next to **Output Variables**, toggle **Structured** on. The `structured_output` variable will appear below. Click **Configure**.
   2. In the pop-up schema editor, click **Import From JSON** in the top-right corner, and paste the following:

```json
{
  "platform_name": "string",
  "post_content": "string"
}
```
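From this example, Dify infers a schema of an object with two required string fields. A minimal check mirroring that shape (a sketch for intuition, not Dify's actual validator):

```python
def matches_schema(item: dict) -> bool:
    # Mirrors the inferred schema: exactly the two fields,
    # both holding string values.
    required = ("platform_name", "post_content")
    return set(item) == set(required) and all(
        isinstance(item[k], str) for k in required
    )
```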
<Frame>
![](https://assets-docs.dify.ai/2025/11/b45b1bee0cbb6a4b0eef5fd6a6485359.png)
</Frame>

5. Click the Iteration node to configure it:
   1. Set `Parameter Extractor/platform` as the input variable.
   2. Set `Create Content/structured_output` as the output variable.
   3. Enable **Parallel Mode** and set the maximum parallelism to `10`.

<Check>
This is why we included `(≤10)` in the label name for the target platform field back in the User Input node.
</Check>

<Frame>
![](https://assets-docs.dify.ai/2025/11/0e4bea30ae1d371c2756ab50ccf31da9.png)
</Frame>
### 8. Format the Final Output: Template Node

<Info>
The Iteration node generates a post for each platform, but its output is a raw array of data (e.g., `[{"platform_name": "Twitter", "post_content": "..."}]`) that isn't very readable. We need to present the results in a clearer format.

That's where the Template node comes in—it allows us to format this raw data into well-organized text using [Jinja2](https://jinja.palletsprojects.com/en/stable/) templating, ensuring the final output is user-friendly and easy to read.
</Info>

<Frame>
![](https://assets-docs.dify.ai/2025/11/0e1ed1e4f79696b2b91c0bbd3f49c5c0.png)
</Frame>

1. After the Iteration node, add a Template node.
2. On the Template node's panel, set `Iteration/output` as the input variable and name it `output`.
3. Paste the following Jinja2 code:

```
{% for item in output %}
# 📱 {{ item.platform_name }}

{{ item.post_content }}

{% endfor %}
```

- `{% for item in output %}` / `{% endfor %}`: Loops through each platform-content pair in the input array.
- `{{ item.platform_name }}`: Displays the platform name as an H1 heading with a phone emoji.
- `{{ item.post_content }}`: Displays the generated content for that platform.
- The blank line between `{{ item.post_content }}` and `{% endfor %}` adds spacing between platforms in the final output.
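To preview what the template produces, here is the same loop expressed in plain Python, with made-up sample posts standing in for the Iteration node's output:

```python
def render(output: list[dict]) -> str:
    # Same logic as the Jinja2 template: one H1 heading plus the
    # post body per platform, separated by a blank line.
    sections = []
    for item in output:
        sections.append(f"# 📱 {item['platform_name']}\n\n{item['post_content']}\n")
    return "\n".join(sections)

# Sample data shaped like the Iteration node's structured output
sample = [
    {"platform_name": "Twitter", "post_content": "Launch day! Meet our AI writing assistant."},
    {"platform_name": "LinkedIn", "post_content": "Today we are announcing a new AI writing assistant for teams."},
]
print(render(sample))
```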
<Tip>
While LLMs can handle output formatting as well, their outputs can be inconsistent and unpredictable. For rule-based formatting that requires no reasoning, the Template node gets things done in a more stable and reliable way at zero token cost.

LLMs are incredibly powerful, but knowing when to use the right tool is key to building more reliable and cost-effective AI applications.
</Tip>

### 9. Return the Results to Users: Output Node

1. After the Template node, add an Output node.
2. On the Output node's panel, set `Template/output` as the output variable.
## Step 3: Test

Your workflow is now complete! Let's test it out.

1. Make sure your Checklist is clear.

<Frame>
![](https://assets-docs.dify.ai/2025/11/9c2924a0e03dd596217049ba78a80fbf.png)
</Frame>

2. Check your workflow against the reference diagram provided at the beginning to ensure all nodes and connections match.
3. Click **Test Run** in the top-right corner, fill in the input fields, then click **Start Run**.

If you're not sure what to enter, try these sample inputs:

- **Draft**: `We just launched a new AI writing assistant that helps teams create content 10x faster.`
- **Upload File**: Leave empty
- **Voice & Tone**: `Friendly and enthusiastic, but professional`
- **Target Platform**: `Twitter and LinkedIn`
- **Language**: `English`

A successful run produces a formatted output with a separate post for each platform, like this:

<Frame>
![](https://assets-docs.dify.ai/2025/11/a3a1a4e8c2c63f8644c2d68ae0e8f600.png)
</Frame>

<Note>
Your results may vary depending on the model you're using. Higher-capability models generally produce better output quality.
</Note>
<Tip>
To test how a node reacts to different inputs from previous nodes, you don't need to re-run the entire workflow. Just click **View cached variables** at the bottom of the canvas, find the variable you want to change, and edit its value.
</Tip>

If you encounter any errors, check the **Last Run** logs of the corresponding node to identify the exact cause of the problem.

## Step 4: Publish & Share

Once the workflow runs as expected and you're happy with the results, click **Publish** > **Publish Update** to make it live and shareable.

<Warning>
If you make any changes later, always remember to publish again so the updates take effect.
</Warning>