---
title: Runtime
icon: "cube"
---
Runtime is the execution environment where your workflows run. It sets the boundaries for what LLMs can access and do.
Dify offers two runtime environments: **Sandboxed Runtime** and **Classic Runtime**, each optimized for different use cases.
## Overview
<Tabs>
<Tab title="Sandboxed Runtime">
<Check>
**Best for:** Complex tasks where LLMs need autonomy to solve problems their own way. More powerful, but slower and more token-intensive.
</Check>
Sandboxed runtime enables LLMs to **execute commands** in an isolated environment. Anything you can do from a terminal, the LLM can do:
- **Run scripts and programs** - Execute code to process data, generate outputs, or perform any computation
- **Install what's needed** - Download libraries and tools on demand using pip or other package managers
- **Access external resources** - Fetch files from URLs, clone repositories, or retrieve data from external sources
- **Work with files** - Access resources like **[skills](/en/use-dify/build/file-system#skills)** in the [file system](/en/use-dify/build/file-system), process files across formats, and generate multimodal artifacts using scripts and tools
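To make the capabilities above concrete, a task like "total the scores in this CSV" can reduce to a couple of terminal commands: write a file, then run a short script against it. The sketch below is purely illustrative (the file name, data, and available interpreters are assumptions; what's actually installed depends on your sandbox provider's image):

```shell
# Hypothetical commands an agent might run inside the sandbox.
# The file name and contents are illustrative only.
printf 'name,score\nalice,42\nbob,7\n' > results.csv  # write a file to the sandbox file system

# Generate and run a small script to process it
python3 - <<'EOF'
import csv
with open("results.csv") as f:
    rows = list(csv.DictReader(f))
print(sum(int(r["score"]) for r in rows))  # total of the score column
EOF
```

In classic runtime, the same task would need a pre-configured tool; in sandboxed runtime the model can decide on, write, and execute the script itself.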
<Tip>
In sandboxed runtime, the Agent node combines the roles of both the LLM and Agent nodes in classic runtime.
For quick, simple tasks that don't need these advanced capabilities, you can disable them by turning off **[Agent Mode](/en/use-dify/nodes/agent#enable-command-execution)** for faster responses and lower token costs.
</Tip>
**LLMs become true agents**. As long as the model has strong tool-calling and reasoning abilities, it can decide which commands to run and execute them to complete tasks autonomously.
This added power is exactly why LLMs need to run in a sandbox: the isolated environment gives them enough freedom to work while keeping operations safe.
<Info>
For the default sandbox provider:
- Dify Cloud uses E2B.
- Self-hosted deployments use SSH VM.
Choose and configure other providers in **Settings** > **Sandbox Provider**.
</Info>
</Tab>
<Tab title="Classic Runtime">
<Check>
**Best for:** Quick, straightforward tasks. Less powerful, but faster and more efficient.
</Check>
Within classic runtime, LLMs do what they do best: analyze information, generate text, reason through problems, and intelligently use pre-configured tools to complete tasks.
Think of it as giving someone a specific toolkit—they're capable, but **limited to what you've provided**.
</Tab>
</Tabs>
## Quick Comparison
| Dimension | Sandboxed Runtime | Classic Runtime |
|:------------------------------|:-------------------------------------|:------------------------------------|
| **Best for** | Complex, autonomous problem-solving | Simple, well-defined tasks |
| **LLM Autonomy** | Runs any command it needs | Uses tools you configure |
| **File System** | ✅ | ❌ |
| **Skills** | ✅ | ❌ |
| **App Export Format** | `.zip` (DSL + resource files) | `.yml` (DSL files) |